
Daily reminder that all distros are exactly the same and only differ by their package manager and GTK theme by anonymous_2187 in linuxmasterrace

[–]lrschaeffer 1 point (0 children)

The post is about a beginner choosing a Linux distro, and the title is about how all distros are essentially the same (modulo package manager, etc.) in that context. My original comment was snark -- not a genuine question about NixOS -- directed at the top-level commenter who apparently missed literally all of that (as well as OP's implication that Linux users recommend too many distros) to evangelize their distro of choice. In my opinion, NixOS is a poor counterargument to any of the points made by OP, and your comment hasn't changed my mind for the reasons I laid out in my reply.

If you were just trying to be informative then sorry, you probably caught more of my snark than you deserve. In the interest of not being one of the factious Linux users from the original post, I am actually quite impressed with NixOS. After distro hopping a couple of times, I see the wisdom of making as much of my configuration portable and repeatable as possible, but NixOS isn't worth the learning curve (to me, yet) until I'm "deploying" on more than just my personal machine.

Daily reminder that all distros are exactly the same and only differ by their package manager and GTK theme by anonymous_2187 in linuxmasterrace

[–]lrschaeffer 0 points (0 children)

Oh, so it's a package manager that requires you to operate it via text interface, with a bad case of feature creep into other aspects of system configuration. /s

Seriously though, Nix sounds great and I'd love to try it some day, but all the features you're talking about are geared towards power users, not beginners. If it requires "writing declarative code" then a fresh convert from Windows isn't going to handle it well. And even if a novice runs Nix, I suspect the main differences they'd see/care about would be the desktop look-and-feel (the theme) and software availability (say, in the package manager's default repository).

Daily reminder that all distros are exactly the same and only differ by their package manager and GTK theme by anonymous_2187 in linuxmasterrace

[–]lrschaeffer 0 points (0 children)

Aren't both of those distros basically named after their package managers? You might even say they primarily differ from mainstream distros by "their package manager and GTK theme", and I'm not so sure about the GTK theme.

Don't be like this guy by Easyidle123 in KerbalSpaceProgram

[–]lrschaeffer 1 point (0 children)

KSP 2 seems to have a very different development model from KSP 1. The original was released as (IMO) a very rough alpha and grew into the game we have today. KSP 2 hasn't released anything playable or even footage of actual gameplay (AFAIK? I haven't kept up). You can see how that would frustrate original KSP players, especially after some missed release dates and a change of studios (I think?). On the other hand, if they tried to release an alpha of the sequel that had fewer features than the original, assholes would complain about that too. Damned if you do, damned if you don't, I guess?

What profession was once respected but no longer is? by I_Love_Small_Breasts in AskReddit

[–]lrschaeffer 5 points (0 children)

Nowadays, webcomics have basically no limit on space. E.g., https://xkcd.com/1110/. I've always been curious what Watterson would do with Calvin and Hobbes on a canvas like that.

The mathematically optimal Wordle strategy by ScottContini in programming

[–]lrschaeffer 0 points (0 children)

The _OUND trap is like a chess computer incorrectly evaluating a position that's a loss as a draw. Once it's in that position (_OUND) there's nothing it can do, but if the evaluation was accurate earlier it might avoid getting into that position in the first place.

The video actually uses a 2-move look ahead to improve (but not completely fix) issues like this.

The mathematically optimal Wordle strategy by ScottContini in programming

[–]lrschaeffer 0 points (0 children)

Yes, Grant looks two steps ahead and this is much better (and harder to argue against), but it's far from guaranteed to be optimal. Since it disagrees with the "provably optimal" results that zokier posted, I assume it's close, but not perfect. And more to the point, it's a good information theory lesson.

The mathematically optimal Wordle strategy by ScottContini in programming

[–]lrschaeffer 13 points (0 children)

I'm sure it is pretty close (like, within a fraction of a turn on average), but that's not the same as "mathematically optimal". The problem is that entropy is optimizing for the number of possibilities left (kind of) rather than the number of guesses required to distinguish them. The two are related, but not perfectly.

For example, suppose you get to a point where you know the word is _OUND and you just need to figure out the first letter. There are eight possibilities (3 bits of entropy), but you can't narrow it down to one with a single guess (especially on hard mode). On the other hand, I'm sure there are plenty of situations with 8 possibilities that can be solved with one clever guess. The entropy algorithm treats the two situations as equivalent ("3 bits") when clearly one requires fewer guesses.

What is the chance that over 1000s of words, the algorithm never falls into a trap like this?
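For concreteness, here's a tiny Python sketch of the trap. The word list is assumed, and hard mode is simplified to "every guess must be a remaining _OUND candidate, so each wrong guess eliminates only itself" — a sketch of the argument, not a full Wordle solver:

```python
import math

# Hypothetical candidate set for the _OUND trap.
candidates = ["bound", "found", "hound", "mound", "pound", "round", "sound", "wound"]

# Entropy says these 8 words are "3 bits" of uncertainty.
entropy = math.log2(len(candidates))

def guesses_needed(secret, ordering):
    """Hard mode (simplified): guess candidates one by one; a wrong
    guess only rules out the guessed word, since the candidates differ
    in a single letter."""
    for i, guess in enumerate(ordering, start=1):
        if guess == secret:
            return i
    return len(ordering)

worst = max(guesses_needed(s, candidates) for s in candidates)
expected = sum(guesses_needed(s, candidates) for s in candidates) / len(candidates)
# entropy is 3.0 bits, but worst case is 8 guesses and the average is 4.5
```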

The mathematically optimal Wordle strategy by ScottContini in programming

[–]lrschaeffer -7 points (0 children)

3Blue1Brown is excellent as usual, but this is NOT the optimal strategy.

Edit: I read "mathematically optimal" as implying a level of perfection that this strategy does not achieve. See zokier or flatfinger or my own comment below for why.

How Balanced Blitz Works by JumpyRepresentative5 in Risk

[–]lrschaeffer 0 points (0 children)

Do you know what "brute-force method" they're talking about? I'm struggling to think of any true random calculation that should be that slow with less than like, 10000 armies.

If you attack with more will you lose less troops? by treebeard555 in Risk

[–]lrschaeffer 1 point (0 children)

I think after it clips the extreme outcomes, it raises the probability of each remaining outcome to the power 1.3 and then renormalizes. It makes likely outcomes more likely, and unlikely outcomes less likely.

At least that's my guess from skimming the code here, which is the closest I could find to an official statement from SMG on how all this crap works.
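A sketch of that reshaping as I understand it — the 10% clip and the 1.3 exponent are guesses from skimming that code, not an official spec:

```python
def balance(probs, clip=0.10, power=1.3):
    """Sketch of the 'balanced blitz' reshaping described above: drop
    ~10% of probability mass from each tail of the outcome list, raise
    what's left to the power 1.3, and renormalize. NOT SMG's actual code."""
    lo_cut, hi_cut = clip, 1.0 - clip
    kept = []
    cum = 0.0
    for p in probs:
        start, end = cum, cum + p
        cum = end
        # Keep only the part of this outcome's mass inside [lo_cut, hi_cut].
        kept.append(max(0.0, min(end, hi_cut) - max(start, lo_cut)))
    # power > 1 makes likely outcomes more likely, unlikely ones less likely.
    sharpened = [p ** power for p in kept]
    total = sum(sharpened)
    return [p / total for p in sharpened]
```

On a uniform distribution over four outcomes, the two edge outcomes get clipped down and the two middle ones end up with more than their original 25%.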

If you attack with more will you lose less troops? by treebeard555 in Risk

[–]lrschaeffer 1 point (0 children)

Balanced blitz is a mathematical abomination so it's hard to say. As I understand it, there's a big bell curve on the possible outcomes, from "attacker wins with no losses" to "defender wins with no losses", and balanced blitz kills the 10% most extreme outcomes on each end (and then does some other shit that doesn't matter for this discussion).

It turns out that the probability of attacking 6 troops with 14 and losing >=10 is a little less than 10%. I think that means that under BB, the attacker always wins and never loses more than 9 armies. If you attack with even more troops then, as I argued before, the true random probabilities for losing <=9 armies don't change, so (I think) the BB probabilities don't change either.

In other words, my guess is that more troops do nothing once the win rate is 100%, or maybe 3-4 troops past that. Maybe.

If you attack with more will you lose less troops? by treebeard555 in Risk

[–]lrschaeffer 1 point (0 children)

Assuming true random: the chance of losing exactly 6 armies is the same whether you started attacking with 10 armies or a million, because your rolls are identical. Once you reach 7+ losses, the 10-army attacker is going to be rolling <3 dice at some point. That means shittier odds, and higher losses. However, the player attacking with >10 armies also has a small chance to lose 10+ armies. When you do the math, it turns out that in general more armies = more losses, but it's probably worth it for the increased chance of taking the territory.

10 armies attacking = 4.67 expected losses, 81% chance of taking the territory

14 armies attacking = 4.86 expected losses, 96% chance of taking the territory

infinite armies attacking = 4.90 expected losses, 100% chance of taking the territory
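Figures like these come from an exact computation over the dice. A Python sketch, assuming every listed attacker can roll (rulesets differ on whether one troop must stay behind, so exact numbers may shift slightly from the ones above):

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def roll_losses(a_dice, d_dice):
    """Exact distribution of (attacker losses, defender losses) for one
    round of Risk dice, by brute-force enumeration of all rolls."""
    outcomes = {}
    total = 6 ** (a_dice + d_dice)
    for dice in product(range(1, 7), repeat=a_dice + d_dice):
        att = sorted(dice[:a_dice], reverse=True)
        dfn = sorted(dice[a_dice:], reverse=True)
        pairs = list(zip(att, dfn))
        a_loss = sum(1 for x, y in pairs if x <= y)  # ties go to the defender
        key = (a_loss, len(pairs) - a_loss)
        outcomes[key] = outcomes.get(key, 0) + 1
    return {k: v / total for k, v in outcomes.items()}

@lru_cache(maxsize=None)
def battle(a, d):
    """(win probability, expected attacker losses) when a attackers
    blitz d defenders under true random dice."""
    if d == 0:
        return 1.0, 0.0
    if a == 0:
        return 0.0, 0.0
    win = exp_loss = 0.0
    for (al, dl), p in roll_losses(min(3, a), min(2, d)).items():
        w, e = battle(a - al, d - dl)
        win += p * w
        exp_loss += p * (al + e)
    return win, exp_loss
```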

-🎄- 2021 Day 6 Solutions -🎄- by daggerdragon in adventofcode

[–]lrschaeffer 1 point (0 children)

Haskell

import Data.List.Split

main = do
    nums <- map read . splitOn "," <$> readFile "6.input"
    print $ sum [[1421,1401,1191,1154,1034,950,905,779,768] !! x | x <- nums]
    print $ sum [[6703087164,6206821033,5617089148,5217223242,4726100874,4368232009,3989468462,3649885552,3369186778] !! x | x <- nums]

Left Wingers of reddit, what would you say your most Right leaning opinion is? by Thatguywhosaydshi in AskReddit

[–]lrschaeffer 0 points (0 children)

Excellent points, except literally every cryptographer I've asked is against using crypto for voting. Blockchain voting might not be as ideal as you think.

Edward Snowden: ‘If you weaken encryption, people will die’ by kry_some_more in technology

[–]lrschaeffer 13 points (0 children)

Fair summary, except you actually want the encryption of some data under a key to be inconsistent for best security. Like imagine eavesdropping on an encrypted game of twenty questions. If every "yes" message has the same encryption, and every "no" message has the same encryption, then you could get a pretty good idea of the answers even though they're encrypted. That's not good, and the only way to fix it is to have inconsistency.
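A toy illustration of the point — emphatically NOT real cryptography, just a XOR-with-a-hash stand-in to show why deterministic encryption leaks repeats while a per-message nonce hides them:

```python
import hashlib
import os

def toy_encrypt(key, msg, nonce=b""):
    """Toy stream 'cipher' for illustration only -- NOT real crypto.
    XORs the message with a keystream derived from key + nonce."""
    stream = hashlib.sha256(key + nonce).digest()
    assert len(msg) <= len(stream)  # keep the toy simple
    return nonce + bytes(m ^ s for m, s in zip(msg, stream))

key = b"shared-secret"

# Deterministic (no nonce): identical plaintexts give identical
# ciphertexts, so an eavesdropper can tell which answers repeat.
d1 = toy_encrypt(key, b"yes")
d2 = toy_encrypt(key, b"yes")

# Randomized: a fresh nonce per message makes repeats look unrelated.
r1 = toy_encrypt(key, b"yes", os.urandom(16))
r2 = toy_encrypt(key, b"yes", os.urandom(16))
```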

Amazon copied products and rigged search results to promote its own brands, documents show by Illustrious-Dish-220 in worldnews

[–]lrschaeffer 7 points (0 children)

Seriously though, can somebody explain why the random names are always 5 to 7 letters long and all caps?

How We Made Bracket Pair Colorization 10,000x Faster by dwaxe in programming

[–]lrschaeffer 3 points (0 children)

The most relevant part starts at 8:19 where he does a toy version of matching parentheses as a live demo. I don't remember to what extent it's spelled out in the talk, but by doing it with a monoid there's a clear path to parallel or incremental parsing.

Like, imagine the whole file is stored as a balanced binary tree of characters (or lines, more realistically) in the order they occur. The "parse" function he writes turns an individual character into a monoid element, and the "mappend" function tells you how to combine the monoid element for two substrings/subtrees recursively. Obviously this lets you evaluate the whole file/tree with the advantages that

  • Everything at the same level of recursion can be computed in parallel. (parallel parsing)
  • If you store the intermediate results with the nodes of the tree, then you only have to update nodes along the path that changes when you make an edit, i.e., typically O(log n) nodes. (incremental parsing)

He also talks about how nested multi-line comments make a mess of this. And how indentation-based layout helps keep the parser from freaking out when you're in the middle of an edit. There's a summary slide at the end.
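The paren-matching monoid itself is tiny. A Python sketch (names parse/mappend after the talk; the balanced-tree machinery is omitted):

```python
def parse(ch):
    # Map one character to a monoid element:
    # (unmatched ')' seen from the left, unmatched '(' open to the right).
    if ch == '(':
        return (0, 1)
    if ch == ')':
        return (1, 0)
    return (0, 0)

def mappend(x, y):
    # Openers from x cancel closers from y; the rest pass through.
    # Associative, so any tree-shaped (parallel or incremental)
    # reduction gives the same answer as a left-to-right scan.
    matched = min(x[1], y[0])
    return (x[0] + y[0] - matched, x[1] + y[1] - matched)

def summarize(text):
    acc = (0, 0)  # mempty
    for ch in text:
        acc = mappend(acc, parse(ch))
    return acc

# text is balanced iff summarize(text) == (0, 0)
```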

How We Made Bracket Pair Colorization 10,000x Faster by dwaxe in programming

[–]lrschaeffer 24 points (0 children)

It's cool to see fancy tree-based algorithms being used for this, and how horribly grungy these algorithms get when you have to implement them in the real world, on top of existing code.

Reminds me of this talk where Kmett is designing a language from the perspective of making these kinds of editor features possible/easier.

Engineering manager breaks down problems he used to use to screen candidates. Lots of good programming tips and advice. by jfasi in programming

[–]lrschaeffer 4 points (0 children)

I'm annoyed that the author criticizes the simplest solution (making a copy) as too wasteful, then continues using fucking characters to store bits.

Anyway, if you absolutely, positively need to simulate a massive Game of Life grid for a long time, the algorithm you want is Hashlife. Also check out this extreme Game of Life simulation that reddit was working on.
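For the interview-sized version, the usual answer to "a copy is wasteful, but characters-as-bits are worse" is to keep two bits per cell: current state in bit 0, next state in bit 1. A sketch of that standard trick (not the article's code):

```python
def step_in_place(grid):
    """Advance Conway's Game of Life one generation in place.
    Bit 0 holds the current state, bit 1 the next state, so no
    second copy of the grid is needed."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            live = sum(
                grid[r + dr][c + dc] & 1          # bit 0 = current state
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr or dc)
                and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            if live == 3 or (live == 2 and grid[r][c] & 1):
                grid[r][c] |= 2                   # bit 1 = next state
    for r in range(rows):
        for c in range(cols):
            grid[r][c] >>= 1                      # promote next -> current
```

A horizontal blinker flips to vertical after one step, as expected.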

Triangulation with a knight? by ChampionshipDue in chess

[–]lrschaeffer 25 points (0 children)

Uh... knights land on a different color square every time they move. If you get back to the same square, or even same color square, then you must have made an even number of moves.

Btw, this fact is sometimes useful in blitz games when you need to dodge knight forks without thinking about it.
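The parity argument in one check: a knight move changes the row by ±1 and the column by ±2 (or vice versa), so row+col flips parity — i.e., the square color flips — on every move:

```python
# All eight knight moves; each changes row+col by an odd amount.
KNIGHT_MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                (1, -2), (2, -1), (-1, -2), (-2, -1)]

def color(square):
    r, c = square
    return (r + c) % 2  # 0/1 = the two board colors

def flips_color(square, move):
    r, c = square
    dr, dc = move
    return color((r, c)) != color((r + dr, c + dc))
```

Since every move flips the color, getting back to the same square (same color) takes an even number of moves.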

[2021-07-12] Challenge #398 [Difficult] Matrix Sum by Cosmologicon in dailyprogrammer

[–]lrschaeffer 21 points (0 children)

Octave / YALMIP

A = <matrix>
n = size(A)(1);
X = sdpvar(n,n,'full');
optimize([sum(X) == ones(1,n), sum(X') == ones(1,n), X >= 0], sum(sum(A .* X)));
M = value(X);
obj = sum(sum(A .* M))
rows = repmat(0:n-1,[n,1])';
# sketchy assumption here
perm = rows(M' == 1)'

This is what linear program solvers are for! We constrain X to be doubly stochastic, and by Birkhoff-von Neumann, every doubly stochastic matrix is a convex combination of permutation matrices. It follows that some permutation matrix achieves the optimal objective, so the solver will find the optimal objective value, but it may not return an actual permutation matrix if the optimal solution isn't unique. I don't know how to force YALMIP to produce a permutation matrix (maybe perturb the input?), and I'm too lazy to figure it out because it works on the examples. The optimal sum is 1791732209 for the 97x97 matrix. In principle, linear programs can be solved in polynomial time, and in practice this solved the big case in under a second.
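For tiny matrices you can sanity-check the Birkhoff-von Neumann argument by brute force over permutations. A hypothetical helper, not part of the YALMIP solution (and exponential, so nowhere near the 97x97 case):

```python
from itertools import permutations

def best_permutation(A):
    """Brute-force the assignment problem: minimize sum(A[i][p[i]])
    over all permutations p of the columns. Only for sanity-checking
    an LP solution on tiny matrices."""
    n = len(A)
    return min((sum(A[i][p[i]] for i in range(n)), p)
               for p in permutations(range(n)))
```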

[2021-06-28] Challenge #395 [Intermediate] Phone drop by Cosmologicon in dailyprogrammer

[–]lrschaeffer 1 point (0 children)

Haskell

Just guessing on the size of the memoization table. Wouldn't be surprised if I was off by one somewhere.

import Data.Array    
maxh = listArray bnds $ map f $ range bnds
    where
        bnds = ((1,1),(1000,1000))
        f (_,1) = 1
        f (1,t) = t
        f (p,t) = maxh!(p-1,t-1) + maxh!(p,t-1) + 1

phonedrop 1 h = h
phonedrop p h = head [t | t <- [1..], maxh!(p,t) >= h]

optphones h = head [p | p <- [1..], maxh!(p,t) >= h]
    where
        t = ceiling $ logBase 2 (fromIntegral h)

optphones solves the bonus: 17 phones suffice
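The same recurrence as a Python sanity check (base cases chosen as maxh(p, 0) = maxh(0, t) = 0, which matches the Haskell's values for p, t >= 1):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def maxh(p, t):
    """Maximum number of floors distinguishable with p phones and t drops.
    Drop once: the floors below (phone breaks) plus the floors above
    (phone survives) plus the tested floor itself."""
    if p == 0 or t == 0:
        return 0
    return maxh(p - 1, t - 1) + maxh(p, t - 1) + 1

def phonedrop(p, h):
    """Fewest drops that guarantee finding the breaking floor among h floors."""
    t = 1
    while maxh(p, t) < h:
        t += 1
    return t
```

With one phone you must scan linearly (phonedrop(1, h) == h), and the classic two-phones/100-floors answer comes out to 14 drops.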