
Issue #172 - June 26, 2022

Here are the top threads of the week, happy reading!

Top comment by corrral

> Take-home technical assignment (~4h) or similar at candidate's choosing

If I can get a similarly-paying job at a place that doesn't do this, I'll skip you.

Many seniors (actual seniors, not 3-years-of-experience seniors) have a network and can say "hey I'm looking" and instantly have multiple options that won't have them do more than talk to the team and manager for an hour or two. If that.

Top comment by l72

I have started doing something completely different from using bookmarks. I set up yacy[1] on a personal, internal server at my home, which I can access from all my devices, since they are always on my WireGuard VPN.

Yacy is actually a distributed search engine, but I run it in 'Robinson mode' as a private peer to keep it isolated, since I just want a personal search of only the sites I have indexed.

Anytime I come across something of interest, I index it with yacy using a depth of 0 (since I only want to index that one page, not the whole site). This way, I can just go to my search site, search for something, and anything related that I've indexed before pops up. I've found this works far better than trying to manage bookmarks with descriptions and tags.
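
For the curious, kicking off that depth-0 index can also be scripted against yacy's HTTP interface. This is only a sketch from memory — the endpoint and parameter names below are assumptions, so verify them against the crawler form on your own instance before relying on it:

  # Index a single page (depth 0) on a local yacy peer (default port 8090).
  # Endpoint and parameter names are assumptions -- check them against
  # your instance's Crawler_p.html form and the yacy API docs.
  curl -s "http://localhost:8090/Crawler_p.html" \
    --data-urlencode "crawlingMode=url" \
    --data-urlencode "crawlingURL=https://example.com/some-article" \
    --data-urlencode "crawlingDepth=0"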

Also, yacy will keep a cache of the content which is great if the site ever goes offline or changes.

If I need to browse, I can use yacy's admin tools to see all the URLs I have indexed.

I have been using this for several months, and I use it way more than I ever used my bookmarks.

[1] https://yacy.net/

Top comment by robcohen

I collect a lot of certs for fun, but most have not been helpful. I still like to collect them anyway.

I think OSCP was the most legitimately useful in tech: https://www.offensive-security.com/pwk-oscp/

PMP has been useful for taking on Project Manager roles, but PrM roles really aren't all that exciting to begin with. It still helps when you want to run your own projects.

I'm currently studying to become a certified parliamentarian through the National Association of Parliamentarians. I'm interested in corporate governance, and learning Robert's Rules of Order definitely helps.

I'm also a certified farmer (yeah, it's a thing). I have 5 sailing certs, 3 scuba certs, and a Wilderness Emergency Medical Responder cert; I'm working on my pilot's license, getting my real estate sales license, hold a General class ham radio license, and am almost done with my CDL. There's lots more I'd have to check my notes on.

I do want to get a Kubernetes cert done this year. Long term, I want to knock out my CPA/CFA exams, but those are a huge commitment, so we'll see if that pans out.

Most of this response hasn't answered your question at all, because certs really are mostly useless. Still fun to collect.

I'd imagine financial certs would be the most useful (CFA in particular).

If anyone knows any other fun certs let me know.

Top comment by wild-eep

I left earlier this year, after working in the space for a couple of years. I started with a neutral position on crypto; I didn't even think about it much. It was more like: it's easy and pays well, and if people want to waste their money on tokens, go for it, but it's not my cup of tea.

Then I started to recognize that it's predatory, not benign or harmless. It is taking advantage of people. It started to feel gross, and nobody I worked with shared any of the skepticism I had. They all pretty much treated it like a religion across the board. Glad I left. I turned down money most people would never get a chance at in their lifetime, but where that money came from did not sit right with me; nobody in that company deserved any of it. I guess I was not cutthroat enough ... it seems like this space is reserved only for snakes and fools.

Top comment by armchairhacker

Git submodules are fine and can be really useful, but they are hard to use well. I've run into problems like:

1. Git clone not cloning submodules by default. You need `git submodule update --init` after the fact, or `git clone --recursive` up front

2. Git submodules being out-of-sync because I forgot to pull them specifically. I'm pretty sure `git submodule update` doesn't always fix this, except perhaps in case 3 below

3. Git diff returning something even after I commit, because the submodule has a change. I have to go into the submodule and either commit/push that as well or revert it. Basically, every operation I do in the main repo I also need to do in the submodule if I modified files in both

4. Fixing merge conflicts and using git in one repo is already hard enough. The team I was working on kept having issues with using the wrong submodule commit, not having the same commit/push requirements on submodules, etc.

All of these can be fixed by tools and smart techniques like putting `git submodule update` in the makefile. Git submodules aren't "bad" and honestly they're an essential feature of git. But they are a struggle, and lots of people use monorepos instead (which have their own problems...).
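
For reference, here's a minimal cheat sheet for the failure modes above (the repo URL and submodule path are placeholders):

  # 1. Clone a repo together with all of its submodules
  git clone --recursive https://example.com/repo.git

  #    ...or initialize them after a plain clone
  git submodule update --init --recursive

  # 2. After pulling the main repo, sync submodules to the recorded commits
  git pull
  git submodule update --recursive

  # 3. A change inside a submodule gets committed and pushed there first,
  #    then the parent repo records the submodule's new commit
  cd path/to/submodule
  git commit -am "Fix bug in submodule"
  git push
  cd -
  git add path/to/submodule
  git commit -m "Bump submodule to fixed commit"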

Top comment by Gaessaki

Lots of great advice here. One piece of advice I got from my publisher when authoring a technical book was to first break down what I was going to write into headings and subheadings, and if possible, into sub-subheadings. Then review that outline to see whether the flow is coherent, and whether there are sections that are missing or could be extracted into another text.

Personally, my process after this is to express my thoughts in bullet points, then insert placeholders and captions for any graphics (e.g. charts or diagrams), and finally to rewrite my bullet points into proper sentences, expanding my examples and adding any interstitial text necessary to make things flow.

Also, I see some comments on keeping things short and to the point. In general, I agree with this, but depending on the medium, sometimes it doesn’t hurt to inject a bit of personality into your writing. Technical writing can be dry at times, and this can deter engagement. Try to use concrete examples whenever possible or refer to other supporting texts.

Top comment by joschmo

This is an ideal time to raise seed money. Many funds have moved heavily down-market, away from the big Series B/Cs of 2020/2021. You now have a lot of tourists at the seed stage who are obligated to deploy capital, and even if they deploy at 10-20% of the previous rate, that's still five $2M seed rounds for every $50-100M Series B/C that used to get done. You won't get a killer valuation like in 2020/2021, but you will have plenty of opportunities. My recommendation is to look for a SAFE and hope the market clears in 2-3 years when you go to raise your A round (this implies you need to give yourself 2-3 years of runway with your seed money and/or get to $1M of ARR faster on decent unit economics).

Some very juicy seed and series A money is being thrown around ignorantly by the same people that caused the last bubble. I've had 2 close friends / family raise their seed rounds in the last 4-6 weeks.

Three things to be aware of:

1. VCs will take their time doing real diligence on your market / team. This means it will take 1-3 months from initial outreach instead of 1-3 days.

2. You should also be raising seed money from angels who are executives or fellow founders at your early customers / pilot partners. Ideally you fill a $2M seed round with ten $50-100K checks from these people plus a great, value-add-oriented seed fund.

3. Raise as much money as you can. In 2021, this was terrible advice. Now, taking 15% dilution is not the end of the world if that is how you stretch to your next raise, versus the 8-10% dilution of yesteryear.

Top comment by 5id

According to @dang (https://news.ycombinator.com/item?id=28479595) via @sctb (https://news.ycombinator.com/item?id=16076041)

  We’re recently running two machines (master and standby) at M5 Hosting. All of HN runs on a single box, nothing exotic:
  CPU: Intel(R) Xeon(R) CPU E5-2637 v4 @ 3.50GHz (3500.07-MHz K8-class CPU)
  FreeBSD/SMP: 2 package(s) x 4 core(s) x 2 hardware threads
  Mirrored SSDs for data, mirrored magnetic for logs (UFS)

Top comment by ageitgey

In my experience, the time to make money selling mobile apps was 2008-2012 or so. The market is totally different now.

In my opinion, it has never been harder to make money by charging for apps:

- Competition is huge (including underhanded competition that will clone any successful app instantly)

- Customer appetite to pay for "apps" is near zero. Customers don't even want to download more apps, let alone pay money for them.

- Programs like Apple Arcade and Google Play Pass have further eroded the idea of paying for apps

- Ad-driven trash apps and poor app store ranking/discoverability have further driven consumers away from trying new apps in app stores

Of course lots of successful businesses use apps as part of their business model. But very few are making money from selling the app itself unless they have a really good niche figured out.

Top comment by 5e92cb50239222b

> VM to only run a browser in there, to keep the memory under control

For other Linux users out there — a VM is not needed for this, use a cgroup with memory limits. It's very easy to do with systemd, but can be done without it:

  $ systemd-run --user --pty --property MemoryHigh=2G firefox

The kernel will prevent Firefox (including all of its child processes) from using more than 2 GiB of RAM by forcing it into swap. To quote systemd.resource-control(5):

> Specify the throttling limit on memory usage of the executed processes in this unit. Memory usage may go above the limit if unavoidable, but the processes are heavily slowed down and memory is taken away aggressively in such cases. This is the main mechanism to control memory usage of a unit.

If you'd rather have it OOMed, use MemoryMax=2G.
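
For reference, the non-systemd route mentioned above boils down to writing to the cgroup v2 filesystem directly. A minimal sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup and you have the permissions (typically root) to create groups there:

  # Create a cgroup, set the throttle threshold, and move the current
  # shell into it; every process started from that shell inherits the limit.
  mkdir /sys/fs/cgroup/browser
  echo 2G > /sys/fs/cgroup/browser/memory.high
  echo $$ > /sys/fs/cgroup/browser/cgroup.procs
  firefox &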

It's actually very useful for torrent clients. If you seed terabytes of data (like I do), the client quickly forces more useful data out of the page cache. Even if you have dozens of gigabytes of RAM, the machine can get pretty slow. This prevents the client from doing that.
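
If the client already runs as a systemd service, the same limit can be attached at runtime with systemctl set-property; the unit name below is just an assumption, substitute whatever your client runs as:

  # Cap an already-running torrent client without restarting it
  # (transmission-daemon.service is an assumed unit name)
  $ systemctl --user set-property transmission-daemon.service MemoryHigh=1G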

There are lots of other interesting controllers that can put limits on disk and network I/O, CPU usage, etc.
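
For instance, CPUQuota and IOReadBandwidthMax are documented in the same systemd.resource-control(5) man page; the commands and device path below are placeholders:

  # Cap a CPU-hungry job at half of one core
  $ systemd-run --user --pty --property CPUQuota=50% some-batch-job

  # Throttle reads from a given block device to 10 MB/s
  $ systemd-run --user --pty --property "IOReadBandwidthMax=/dev/sda 10M" some-backup-job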