

Issue #272 - May 26, 2024

Here are the top threads of the week, happy reading!

Top comment by xivzgrev

Disclaimer: I used to work at a live video streaming company as a financial analyst, so I'm quite familiar with this.

The biggest cost is, as you'd imagine, the streaming itself: getting the video to the viewer. It was a large part of our variable cost, and we had a (literal) mad genius DevOps person holed up in his own office cave who managed the whole operation.

I've long forgotten the special optimizations he did, but he would keep finding ways to improve margin/efficiency.

Encoding is a cost, but I don't recall it being significant.

Storage isn't generally expensive. Think about how cheaply you as a consumer can get 2 TB of storage, and extrapolate.
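A quick back-of-envelope calculation makes the delivery-vs-storage gap concrete. All of the prices, bitrates, and viewer numbers below are illustrative assumptions, not figures from the comment:

```python
# Why delivery dwarfs storage for live video (all numbers assumed).
CDN_PER_GB = 0.02            # assumed CDN egress price, $/GB
STORAGE_PER_GB_MONTH = 0.02  # assumed object storage price, $/GB-month

BITRATE_MBPS = 4             # one 1080p stream
viewers = 10_000
hours_watched = 2            # per viewer per day

# Mbps -> GB per viewer-hour: divide by 8 for bytes, * 3600 s, / 1000 MB
gb_per_viewer_hour = BITRATE_MBPS / 8 * 3600 / 1000   # 1.8 GB/hour
daily_delivery_gb = viewers * hours_watched * gb_per_viewer_hour
monthly_delivery_cost = daily_delivery_gb * 30 * CDN_PER_GB

# Versus archiving a 24/7 channel for a month:
archive_gb = gb_per_viewer_hour * 24 * 30
monthly_storage_cost = archive_gb * STORAGE_PER_GB_MONTH

print(f"delivery: ${monthly_delivery_cost:,.0f}/mo, "
      f"storage: ${monthly_storage_cost:,.2f}/mo")
```

With these assumed numbers, delivery comes out around $21,600/month while storing the whole archive is under $26/month; delivery scales with every additional viewer-hour, storage does not.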

The other big expense - people! All those engineers to build back-end and front-end systems. That's what ruined us - too many people were needed and not enough money coming in, so we were burning cash.

Top comment by lr4444lr

As someone with obligations to provide for a family in the USA, I can't imagine leaving dev work without an absolutely clear passion and burning drive to do something specific. Giving up a six-figure income that feeds and houses my family, from work that demands I use my brain while sitting in a comfortable indoor environment, doing nothing more physically taxing than using a keyboard? Sure, I have fantasies from time to time about doing something with more dynamism in meatspace, but let's get real: it's a fantasy. I can't imagine recommending anyone with a stable career in data work upset that apple cart unless they already have a clear aim in mind, one they think about day and night - not with the unsureness of this post.

Top comment by mediumsmart

Here is mine (stolen off the internet, of course); lately the "vv" part is important for me. I am somewhat happy with it.

You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so.

Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either. Don't be verbose in your answers, but do provide details and examples where it might help the explanation. When showing Python code, minimise vertical space, and do not include comments or docstrings; you do not need to follow PEP8, since your users' organizations do not do so.

Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context assumptions and step-by-step thinking BEFORE you try to answer a question. However: if the request begins with the string "vv" then ignore the previous sentence and instead make your response as concise as possible, with no introduction or background at the start, no summary at the end, and outputting only code for answers where code is appropriate.

Top comment by freedomben

There are actually a lot of factors here, although if I had to sum it up in one, it would be "the hardware can support it." But more seriously:

1. There are a lot of security mitigations in modern Windows that add sometimes substantial runtime overhead, but offer significant protection. Writing an exploit for modern Windows is hard compared to XP SP1 and below.

2. The graphics are a lot more sophisticated. Animation and transparency and lots of good stuff.

3. There are a ton more "features" in the OS. Modern Windows is continually running a bunch of things that XP didn't.

4. All software tends to bloat over time. It's really hard and expensive to optimize things, especially in a huge and long-lived OS where the scope is enormous and PMs are constantly wanting things added. When it runs sufficiently on modern hardware, there's also very little incentive to do so. (As an aside, this is one of the reasons I love Linux. There's lots of incentive to optimize there and thus it does happen).

Overall I really miss XP. Surely nostalgia is a powerful drug, but damn that was a great OS.

Top comment by MyFirstSass

I've been curious as to when games would implement any of these new technologies, but I think they are simply too slow for now?

I think we're at least 10-15 years from being able to run low-latency agents that "RAG" themselves into the games they are part of, where there are hundreds of them, some of them NPCs, others controlling some game mechanic or checking whether the output from other agents is acceptable or needs to be run again.

At the moment a MacBook Air with 16 GB can run Phi-Medium (14B), which is extremely impressive, but at 7 tokens per second it's way too slow for any kind of gaming; you'd need 100x the performance, and we'd need 5+ hardware generations before I can see this happening.
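The "100x" claim can be sanity-checked with some rough arithmetic. The reply length, latency budget, and concurrent-agent count below are assumptions chosen for illustration, not numbers from the comment:

```python
# Rough latency math behind the "100x" claim (assumed numbers).
tokens_per_reply = 60     # a short NPC line of dialogue
current_tps = 7           # tokens/sec quoted for the laptop
target_latency_s = 0.5    # an acceptable in-game response time

current_latency = tokens_per_reply / current_tps     # ~8.6 s per reply today
required_tps = tokens_per_reply / target_latency_s   # 120 tok/s for one agent
speedup_one_agent = required_tps / current_tps       # ~17x

agents = 10  # agents needing replies at roughly the same time
speedup_concurrent = speedup_one_agent * agents      # ~171x

print(current_latency, required_tps, speedup_concurrent)
```

A single responsive agent already needs roughly a 17x speedup; with even ten agents active at once, the required speedup lands in the same ballpark as the "100x" figure.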

Unless there's some other application?

Top comment by threecheese

Diagnosed at 45, the best metaphor I’ve heard is “everyone has to carry all their marbles in a bag; adhd is like holes in your bag, or not having a bag at all”.

I'd been diagnosed in the 1980s - no treatment, no info; I was a "gifted" golden boy. As an adult I immediately failed out of college (and more), found that drugs relaxed me, and lived with anxiety, depression, addiction, obesity, and loneliness for 25 years. Then,

Vyvanse

It didn’t fix everything, but helped me to understand that motivation is a chemical, I’m not just lazy and worthless. One can build on this.

I’ve tried Vyvanse, Adderall, and now take Adzenys. Vyvanse was the most effective, Adderall was OK, had to move to Adzenys due to supply chain (it’s my secret, there’s no shortage of it). You should understand the pharmacology of amph, and how body stuff (like urine pH, malabsorption conditions) can affect it. You should experiment with diet, supplements, timing, hydration, and should be taking time off/titrating to reset tolerance. You should not continue to increase dosage arbitrarily, if you develop tolerance manage that within your dosage. See a cardiologist, they may want to counter it pharmaceutically. Beware of antacids and amphs.

I hope you are young. I spent a lifetime trying to catch falling marbles, and problem-solve my way into preventing it again, not realizing that I’m supposed to have a bag. I could never look beyond that problem to new opportunities, and adding new marbles to my life was out of the question. If it wasn’t for my programming hobby, I’d be in a much bigger hole than I am.

Top comment by slabity

FreeCAD has improved immensely over the past 2-3 years in terms of stability and features. A decade ago it was not uncommon for me to experience crashes from it randomly losing its GLX context or the constraint solver segfaulting for some reason. Now it's rare for it to crash at all for me, though I still run into a lot of constraint solver errors that are a pain to deal with.

However, despite the recent improvements, I still cannot recommend it for new users compared to commercial solutions for the sole reason of the Topological Naming Issues: https://wiki.freecad.org/Topological_naming_problem

This has probably been the #1 problem I've had with FreeCAD since I started using it. And though I've learned how to design parts to work around the problem in most situations, it's a huge hurdle for newcomers to understand and get around. Luckily there's a fork that fixes a significant number of the issues: https://github.com/realthunder/FreeCAD_assembly3 and https://github.com/realthunder/FreeCAD

I've also heard of Ondsel, which is supposedly a much more user friendly version of FreeCAD that also includes some fixes to the issue: https://ondsel.com/

EDIT: Here's actually a better read of the topological naming issue, what's being done about it, and why it's difficult to fix: https://ondsel.com/blog/freecad-topological-naming/

Top comment by dang

I don't know about "so far in 2024" but Pragmatics of Human Communication by Watzlawick et al. is great: https://wwnorton.com/books/9780393710595.

Its sequel Change is also great: https://wwnorton.com/books/9780393707069, though in my copy the opening 20 pages or so are printed at the end (as if the printing machine used a ring buffer). That is a pity because the Preface has one of the great openings of all time. I was paging through the entire book looking for it—I'd read it years ago and remembered it being at the beginning, which it is, except the beginning was at the end in this copy.

Top comment by recursivedoubts

I created the library that would become intercooler.js in 2012 and released it in 2013, based on a mashup of $.load(), pjax & angular attributes.

The world at that time was not ready to consider an alternative to the hot new ideas coming out of the big tech companies (angular from google, react from facebook).

In 2020 during covid I decided to rewrite intercooler.js w/o the jQuery dependency and rename it to htmx. The Django community started picking it up because they were being largely ignored by the next.js/etc. world and they didn't have a built-in alternative like Rails has w/ Turbo.

In 2023 it got picked up by an ocaml twitch streamer, teej_dv, who knew some other folks in the twitch programming community. He told ThePrimeagen about it who took a look at it in July 2023 on stream and became enthusiastic about it. At the same time FireshipDev did an "htmx in 100 seconds" short on it. That lit the rocket. I was lucky that I also had just released my book https://hypermedia.systems at around the same time (it had been cancelled by a major publisher about a year beforehand.)

Another thing that happened is that Musk bought twitter, and a large number of big tech twitter accounts left. This opened up an opportunity for new tech twitter accounts to grow up, like a fire in a forest. I am pretty good at twitter and was able to take advantage of that situation.

Here's the story visually:

https://star-history.com/#bigskysoftware/htmx&bigskysoftware...

So I spent about a decade screaming into the void about hypermedia and had largely given up on the idea making a dent, rewrote intercooler.js just to stay sane during covid and then got very, very lucky.

Top comment by mateo1

I'm not a programmer, and when I write a program it's imperative that it's structured right and works predictably, because I have to answer for the numbers it produces. So LLMs have basically no use for me on that front.

I don't trust any LLM to summarize articles for me as it will be biased (one way or another) and it will miss the nuance of the language/tone of the article, if not outright make mistakes. That's another one off the table.

Although I don't use them much for this, I've found two things they're good at:

- Coming up with "ideas" I wouldn't come up with
- Summarizing hundreds (or thousands) of documents in a non-standard format (i.e. human-readable reports, legal documents) that regular expressions wouldn't work with, and putting them into something like a table

But still, that's only when I care about searching or discovering info/patterns, not when I need a fully accurate "parser".
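That "documents into a table" workflow can be sketched roughly as follows. `call_llm` is a hypothetical stand-in for whatever model API or local runtime you'd actually use (stubbed here with a canned reply so the demo is self-contained), and the field names are made up:

```python
import json, csv, io

# Ask the model for one JSON object per document, then collect the rows.
FIELDS = ["party", "date", "amount"]  # made-up fields for illustration

def extraction_prompt(doc: str) -> str:
    return (f"Extract {', '.join(FIELDS)} from the document below. "
            f"Reply with a single JSON object and nothing else.\n\n{doc}")

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real model call would go here.
    return '{"party": "Acme Corp", "date": "2024-03-01", "amount": "$12,500"}'

def docs_to_table(docs):
    rows = []
    for doc in docs:
        reply = call_llm(extraction_prompt(doc))
        data = json.loads(reply)  # fails loudly if the model strays from JSON
        rows.append([data.get(field, "") for field in FIELDS])
    return rows

rows = docs_to_table(["...a scanned legal agreement..."])
out = io.StringIO()
csv.writer(out).writerows([FIELDS] + rows)
print(out.getvalue())
```

The point of the pattern is the commenter's caveat: `json.loads` catches malformed replies, but nothing here verifies the extracted values are correct, so it suits search and discovery rather than a "fully accurate parser".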

I'm really surprised at how useless LLMs turned out to be for my daily life, to be honest. So far, at least.