
Like what you see? Subscribe here and get it every week in your inbox!

Issue #202 - January 22, 2023

Here are the top threads of the week, happy reading!

Top comment by ignoramous

ex-AOSP dev here

Android and ChromiumOS are likely the most trustworthy computing platforms out there; doubly so for Android running on Pixels. If you'd rather not run the ROM Google ships with, you can flash GrapheneOS or CalyxOS and relock the bootloader.

Pixels have several protections in place:

- Hardware root of trust: This is the anchor on which the entire TCB (trusted computing base) is built.

- Cryptographic verification (verified boot) of all the bootloaders (IPL, SPL), the kernels (Linux and LittleKernel), and the device tree.

- Integrity verification (dm-verity) of the contents of the ROM (/system partition which contains privileged OEM software).

- File-based Encryption (fscrypt) of user data (/data partition where installed apps and data go) and adopted external storage (/sdcard); decrypted only with user credentials.

- Blobs that traditionally ran at higher exception levels (like ARM EL2) now run in a restricted, mutually untrusted VM.

- Continued modularization of core ROM components so that they can be updated just like any other Android app, i.e. without having to update the entire OS.

- Heavily sandboxed userspace, where each app has a very limited view of the rest of the system, typically gated by Android-enforced permissions, seccomp filters, SELinux policies, POSIX ACLs, and Linux capabilities.

- Private Compute Core for PII (personally identifiable information) workloads. And the Trusty TEE (Trusted Execution Environment) for high-trust workloads.

This is not to say Android is without exploits, but it seems to be the furthest ahead of the mainstream OSes. That is not a particularly high bar, given closed-source firmware and baseband, but this ties in with the general need to trust the hardware vendors themselves (see point #1).
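
The verified-boot and dm-verity points above come down to the same primitive: nothing is trusted until its cryptographic hash matches a value vouched for by the previous, already-trusted stage. Here is a minimal Rust sketch of that idea, assuming the sha2 and hex crates; the payload and expected hash are made up for illustration, and in the real chain the first expected value is anchored in hardware and the hashes are covered by signatures.

    // Cargo.toml (assumed): sha2 = "0.10", hex = "0.4"
    use sha2::{Digest, Sha256};

    // Returns true if `payload` hashes to `expected_hex` (SHA-256).
    fn verify_payload(payload: &[u8], expected_hex: &str) -> bool {
        hex::encode(Sha256::digest(payload)) == expected_hex
    }

    fn main() {
        // Hypothetical next-stage image; in a real chain the expected
        // hash comes from a signed header, not from hashing the image
        // itself like this demo does.
        let next_stage = b"kernel image bytes...";
        let trusted_hash = hex::encode(Sha256::digest(next_stage));

        if verify_payload(next_stage, &trusted_hash) {
            println!("hash matches, continue booting");
        } else {
            println!("verification failed, refuse to boot");
        }
    }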

Top comment by jsenn

* The Art of Probability by Hamming. An opinionated, slightly quirky text on probability. Unlike the text used in my university course, its explanations were clear and rigorous without being pedantic. The exercises were both interesting and enlightening. The only book in this list that taught skills I've actually used in the real world.

* Calculus by Spivak. This was used in my intro calculus course in university. It's very much a bottom-up, first-principles construction of calculus. Very proof-based, so you have to be into that. Tons of exercises, including some that sneakily introduce pretty advanced concepts not explicitly covered in the main text. This book, along with the course, rearranged my brain. Not sure how useful it would be for self-study though.

* Measurement by Lockhart. I haven't read the whole thing, but have enjoyed working through some of the exercises. A good book for really grokking geometric proofs and understanding "mathematical beauty", rather than just cranking through algebraic proofs step by step.

* Naive Set Theory by Halmos. Somewhat spare, but a nice, concise introduction to axiomatic set theory. Brings you from nothing up to the Continuum Hypothesis. I read this somewhere around my first year in university and it was another brain-rearranger.

Top comment by seanhunter

My pet peeve is the labeling of these things.

1) I'm going to give you an easy one to start. You see a toggle switch. It is set to on (probably - the little colour bar in the switch is coloured in).

It is labeled "Disable fnurbification".

Okay, now what? Does "on" mean I'm going to be fnurbified? Does switching the switch disable the fnurbification, so I actually have to switch it to "off"? No, that's crazy. "on" means "disabled", cognitive dissonance aside.

2) You see a toggle switch. It is set to on like before. It is labeled "Disable fnurbification".

We learned before that "on" meant "disabled", but that filled us with a vague sense of unease. For whatever reason we try toggling the switch. The text changes to just "Fnurbification".

Okay really now what? Is my fnurbification on? You try flicking the switch back. The colour fills in and the label changes to "Disable fnurbification" again. Okay what are we supposed to do?

What's happened is the designer has read a post on Medium about accessibility saying that screen readers don't read out the colour of the filled-in part of a toggle switch, and has decided to help by changing the label when the state of the switch changes.

The problem is now the label could either be describing the current state or be describing what happens when you flip the switch. And there's really no way of knowing. I've seen this very often with the UX for boolean selectors where they use things like buttons rather than toggle switches. Does pressing the button do the thing it says on the label or does the label describe where we are now and pressing the button will reverse that? No way to be sure.

Postscript: Notice that whatever you decide is correct in the second case could change what you would do in the first case, if the first type of selector is one that would change its label when you toggle it.
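
One way to remove the ambiguity described above is to make the label always name the setting and report the state separately, so "on" can only ever mean the thing is happening. A minimal Rust sketch of the two labelling strategies (the fnurbification setting is, of course, just the commenter's placeholder):

    // Ambiguous: the label flips with the state, so you can't tell whether
    // it describes the current state or the action flipping it would take.
    fn ambiguous_label(enabled: bool) -> &'static str {
        if enabled { "Disable fnurbification" } else { "Fnurbification" }
    }

    // Unambiguous: the label always names the setting; the state is reported
    // separately (visually and to screen readers), so "on" can only ever
    // mean "fnurbification is happening".
    fn unambiguous_label(enabled: bool) -> (&'static str, &'static str) {
        ("Fnurbification", if enabled { "on" } else { "off" })
    }

    fn main() {
        for enabled in [true, false] {
            println!("ambiguous:   {}", ambiguous_label(enabled));
            println!("unambiguous: {:?}", unambiguous_label(enabled));
        }
    }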

Top comment by pavlov

I'm 42 and last year moved back to my home country. We wanted to throw a housewarming party but getting enough people in one place on a specific day at a specific time seemed like an impossible challenge. Everybody had busy schedules for their precious August weekends anyway.

We ended up doing a full-weekend party where people could drop by whenever they could. The first guests came on Friday around 6 pm and the last ones left on Sunday at 11 pm. In the end people were spread out evenly. Some came with kids in the day, others alone in the evening. This way there was really time to sit down and talk with old friends whom I hadn't seen in years or even decades.

For food, we ended up having ingredients in the fridge for a few quick dishes like Vietnamese rice paper rolls, and we'd make them together with new guests if they were hungry. It worked out fine.

Of course dedicating the entire weekend to a party is a big commitment, but I think it actually reduced the level of stress compared to trying to do a traditional "dense" party on Saturday night.

Top comment by azemetre

While it's not for complete beginners, if you'd like to learn Rust, I recently finished "Command-Line Rust" [1].

It was my first introduction to Rust and the book was quite enjoyable. It starts off by teaching you the very basics of the command line (what it means for a program to exit true or false, etc.), and each chapter has you recreate a popular command line tool (like grep, cal, tail, wc) while introducing a new Rust concept.

The book also uses TDD (test-driven development), first teaching you how to create these tests and then, in subsequent chapters, having the tests prewritten for you.

It's definitely worth a look; the author has a great writing style as well that isn't as monotonous as most programming books I've read.

[1] https://www.oreilly.com/library/view/command-line-rust/97810...
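
For a flavour of that test-first workflow, here is a generic sketch (not code from the book), assuming the assert_cmd and predicates crates and a hypothetical echo-like binary named "echor" in the same crate: the tests run the compiled binary and assert on its exit status and output.

    // tests/cli.rs: dev-dependencies (assumed) assert_cmd = "2", predicates = "2"
    use assert_cmd::Command;
    use predicates::prelude::*;

    #[test]
    fn prints_its_argument() {
        Command::cargo_bin("echor")
            .unwrap()
            .arg("hello")
            .assert()
            .success()
            .stdout(predicate::str::contains("hello"));
    }

    #[test]
    fn fails_without_arguments() {
        // Assuming the hypothetical binary requires at least one argument.
        Command::cargo_bin("echor").unwrap().assert().failure();
    }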

Top comment by floren

I worked for Sandia. Pay is pretty good by almost any standard except FAANG. The glory days where every staff member got a real office with a real door are over (shared offices are the norm), but it's still a pretty decent work environment.

Things don't move fast, as another commenter said. In my area of work, projects tended to last 1-3 years and you'd be on several projects at any given time. In general, it is ICs rather than managers who run the projects. Your manager might say "Bob over in 9876 has a neat project that could use somebody like you, send him an email if you're interested".

You have to acknowledge that the core mission of the DOE National Labs is nuclear weapons. You might not ever come in contact with the mission, but it is there. They have strong HPC programs--because HPC as we know it is basically driven by the need to simulate nuclear weapons. Some people have moral objections to this, and that's fine!

I thought it was a good place to work, all in all.

Edit: I'd like to stress that probably the biggest advantage of the labs is the opportunity for self-directed work. If you can convince somebody (external sponsors, internal R&D funding committees) to give you money, you can work on just about anything. If you can't get funding of your own, you are still more or less able to choose what you work on.

Your work environment will depend highly on which group you're in. Some groups look like a university department without the students: you work in the unclassified area, you publish papers, you can even open-source software (with some effort). Other people spend their whole day in a windowless SCIF working on very sensitive stuff which they can never, ever discuss outside of a SCIF -- but while their public visibility is nil, their impact is arguably greater.

Top comment by mooreds

I like the advice here: https://blog.pragmaticengineer.com/advice-for-junior-softwar... (I didn't write it, but it jibes with my experience).

I also wrote something here on how to stand out: https://letterstoanewdeveloper.com/2022/09/19/ways-to-stand-...

Finally, you ask:

> Is it better to get any job than keep searching for the job I'd be most suited for?

That depends on your financial situation and emotional runway, but my advice would be that in general it is far easier to get a job once you have a job. I wouldn't advise taking a job digging ditches (unless you need the $$$), but if you can find something that is related to your chosen profession, isn't clearly toxic, and is a full time paying job, take it.

If the company is good, you'll have the ability to grow internally and you'll be a known quantity.

If the company is not great, you'll at least have some experience to put on your resume. You may even be able to help improve the company. At worst you'll have a salary and title and be able to job search from there.

Top comment by gensym

I know lots of people who have started their careers over in their 30s. My mom, for instance, went to school to become a respiratory therapist. My dad was in his 30s when he learned to be a tool and die maker. Both of them needed formal training.

I've hired people who have switched careers by going to dev bootcamps, but their situation was different because they hadn't had a period of downtime. In general, though, I love hiring career switchers. They've demonstrated the ability to learn and adapt rather than just let momentum carry them on.

As someone who hires software engineers, here's my perspective on this question:

> -Will tech co’s ever consider hiring someone like me?

Based on your description right now, I probably wouldn't hire you. You sound like you are directionless. I love mentoring new engineers and cultivating their growth, but I need them to have the focus and the drive to do that growth, and the description you have here doesn't paint a picture of someone who would likely be successful at that.

However, if I were to see the resume of someone who had spent their early 30s directionless, then figured out what they wanted to do and taken serious steps toward it, with a record of setting ambitious but reasonable goals for themselves and hitting them, along with developing the beginnings of the technical skills needed for the role, I'd be really excited to consider them.

I suggest you hire a career coach to explore your options; you're younger than you think. I wish you the best in figuring this out.

Top comment by alexb_

I've often wondered why a service doesn't exist that allows you to rent out your graphics card for the large-scale data processing needed for training models. Like mining Bitcoin, except you are doing something actually useful and getting paid actual money for it. Example:

- Company Alpha needs $40,000,000 worth of cloud computing for their training model

- Company Beta provides them said cloud computing for $30,000,000 from their pool of connected graphics cards

- Individuals can connect their computers to the Company Beta network and receive compensation for doing so. In total $20,000,000 is distributed.

Company Alpha gets their cloud computing done for cheap, Company Beta pockets the $10,000,000 difference for running a network, the individuals make money with their graphics cards, except this time it's actual United States Dollars. What am I missing here that would make this type of business unfeasible?

Top comment by guptaneil

I don’t have any unique insights into TikTok nor do I use the app, but TikTok has several product advantages over those other platforms:

1. They only have a queue of videos. That means there is one video playing and they know exactly what the next N videos are going to be, so they can start buffering them early (see the sketch after this list).

2. The video plays are high intent. If you launch TikTok, you are 100% going to watch videos. This is not true for Reddit or Twitter, making it harder for them to preemptively buffer content or over-optimize their architecture just for video playback.

3. As you noted, the videos are lower quality and shorter than YouTube or Netflix videos. They are also exclusively optimized for mobile, so they don't need to worry about streaming videos larger than a phone screen.

4. Most videos in your feed might be a similar style. This one is a stretch but I wouldn’t be shocked if TikTok uses some ML to extrapolate highly compressed videos early in the buffer.

5. Depending on who you believe, ByteDance might have financial backing and motivations that make it easier to throw more money at this problem than its American competitors.
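
Points 1 and 2 above are essentially a prefetching argument: the client knows the next N items in its queue and knows they will almost certainly be watched, so it can start fetching them while the current one plays. A rough Rust sketch of that idea (the types and the fetch step are invented for illustration):

    use std::collections::VecDeque;

    // A queued video the feed already knows about (fields are illustrative).
    struct Video {
        id: u64,
        url: String,
    }

    struct Feed {
        queue: VecDeque<Video>,
        prefetch_depth: usize,
    }

    impl Feed {
        // Stand-in for actually downloading bytes into a local cache.
        fn prefetch(&self, video: &Video) {
            println!("prefetching video {} from {}", video.id, video.url);
        }

        // When the current video starts playing, kick off fetches for the
        // next few items in the queue so they are ready before they're needed.
        fn on_playback_started(&self) {
            for video in self.queue.iter().take(self.prefetch_depth) {
                self.prefetch(video);
            }
        }
    }

    fn main() {
        let feed = Feed {
            queue: (1..=5u64)
                .map(|id| Video { id, url: format!("https://example.com/v/{id}") })
                .collect(),
            prefetch_depth: 2,
        };
        feed.on_playback_started();
    }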