Sarah's Blog

About

Blog

I mostly write about software, video games, current events, and whatever else is on my mind. (These are my personal thoughts and they do not necessarily reflect the views of my employer.)

💻 = Tech, 🎮 = Gaming, 👩🏻‍💻 = Everything else; 📣 = My favorites

17 October 2024: My favorite web pages/sites

In no particular order:

17 April 2024: Novo Amor

I used to be Bon Iver's biggest defender, fan for life, play any song and I am in tears belting sounds that sound almost like English if you kind of squint. I still love BI, but I don't love... really anything Justin Vernon put out after 2016.

Climb is all we know.

I don't need to convince you how good For Emma was, or how far astray the Bon Iver project has gone since its 2007 release. Pitchfork stabs, "And then there's Bon Iver's latest, 22, A Million, a garbled cryptograph of a record that seems to understand it's asking more from listeners than most are willing to give (nobody requests 22, A Million songs at a Bon Iver concert -- and not just because they can't pronounce their titles)." To be fair, I thought some of 22, A Million was good -- CREEKS, and 8 (circle) in particular -- but that may just be because I went to my first and only BI show during this era, in Brooklyn, when I was 20, on my third date with my now-husband. CREEKS live is something else.

For Emma was followed in 2011 by Bon Iver, Bon Iver and I'll say it -- I think BI, BI was the superior album if only for how musically interesting it was. In my opinion, this album was the zenith of Bon Iver. The lyrics made no sense but the music spoke. BI also ran something called the stems project in about 2012 where fans were given the album stems and sent in some absolute bangers. Imagine Minnesota, WI with the bari sax cranked all the way up and remixed right at the height of dubstep's mainstream run. These have been scrubbed from the internet for some fucking reason, and I have never been able to find them since. In one moment of total insanity I considered personally DM'ing Sean Carey on Instagram. I didn't though.

I think I've seen this film before, and I didn't like the ending.

I think collaborating with Taylor Swift on Exile was a big mistake. It kind of confirmed what I feared was already true after I first listened to i,i -- that the era of ethereal, harmonious, weird Bon Iver was squarely behind us. To me, it felt like a sort of cheap, bland, performative rendition of the Bon Iver sound being played over one of Taylor Swift's not-interesting songs (and I love TS!). JV's voice is absolutely amazing, but what a waste to use it on this song!

I dislike the direction he's taken, but JV's artistic freedom and refusal to conform has always been his strength. I'm glad he's doing whatever it is he did in i,i. I just don't really like it. Does that hurt as a BI, BI enjoyer? Yeah! But all of JV's music is experimental and it seems the experiment has moved beyond me. Or maybe it's that I dislike i,i to the extent that I can't even put together a critique, so I just refuse to engage and applaud JV for following his dreams.

All the good words have left my mouth, I'm completely out of things to say about it now.

The good news for me is that Novo Amor is doing it like nobody else! Or... he's doing it a lot like JV used to! When Spotify first presented Halloween to me, I thought JV had put out a new single. But the words and the instruments are a bit too comprehensible to be JV. The song is still heavily metaphoric, but the words make sense, and the instruments are identifiable. This song feels like what Exile should have been -- ethereal Bon Iver harmonies, set to a beautiful, melodic song that appeals to Bon Iver fans + regular pop fans. Halloween is a strong contender for "one of Sarah's favorites ever" because it hits the middle of the Venn diagram where one circle is "beautiful songs that would sound good with just a regular dude singing over an acoustic guitar because they are just nice songs" and the other is "songs that are only good because of the Bon Iver-style harmonies and octaves with mixing magic."

19 February 2024: How will we save our public schools?

I graduated from a public school in the Pacific Northwest 10 years ago. According to US News and World Report, my high school's graduation rate is 76%. Its math, reading, and science proficiency percentages as of 2019-2021 were 21%, 54%, and 30%, respectively. That's lower than the district and state averages. Though my graduation year was 2014, it seems that not much has changed. Things were not good. And I'm not really here to talk about it being not good as much as I am here to talk about the response to it being not good. If the school was not good, the attempts to make it better were really not good.

Act I: The International Baccalaureate

When I attended, my high school was an International Baccalaureate (IB) school. This was... unique, especially for a school with our funding and proficiency rates. It was sold to us as an international program that would be recognized around the world. We were following a standard that would set us apart as thinkers, leaders, and college applicants. Nobody said it outright, but it was implied that we could eat the Advanced Placement program for breakfast after earning an IB diploma. Of course, none of that was true. Sure, the IB Program itself is great. But our implementation of it felt like a struggle. Our students and teachers weren't prepared for the rigor.

There are many examples of the IB program being just a little too much for my high school, but a few stand out. First was that we could never be offered more than IB Math Studies, which is the younger sibling to IB Math SL and IB Math HL. This wasn't fully the high school's fault; it was the district's. Most of the students didn't start Algebra I until their freshman year, which would mean they took Geometry, then Algebra II. A fourth year of math was optional, but after Algebra II came IB Math Studies (also known as pre-calc). Students who took Algebra I in middle school could make it to Calculus by senior year. I'm not sure why we didn't have IB Math SL and IB Math HL instead of IB Math Studies and Calculus, but I suspect it had to do with the teaching resources available and the preparedness of the students.

The next example had to do with our science curriculum. Our only HL science class was Biology. Physics and Chemistry both had an SL class, but they alternated years, meaning that most IB students could only take one of them and would have to take the other as a non-IB class.

But even with all that said, my biggest complaint about the IB program is with the curriculum itself. While some students will go on to live in other places or attend an international university, the vast majority of IB students from small-town PNW will end up attending an American university and living in the USA. And while we certainly learned a lot of really interesting and important things about the world, we neglected to learn much of our own history in the process. When most of the students at your college took APUSH, and you took IB History of the Americas, you begin to realize how much of the history of the United States you never learned.

A few years after I graduated, the school retired the IB program and became an AP school.

Act II: Experimental Curricula (We Are Not Test Subjects)

The worst attempt at giving us a boost, in hindsight, wasn't for our benefit. We were used as test subjects. In my senior year, our calculus teacher allowed an entire semester of the class to be semi-taught by his wife, who was a PhD candidate in Mathematics Education at the time. She brought her own materials and would teach parts of the class. This is so unethical, I can't believe none of the parents complained. But most of our parents were very busy with work, and many of them didn't even attend college or ever take a calculus class. Today, if I were one of those parents, I would raise hell about my child being used as a test subject for somebody's wife's PhD.

Yet another example of this happening to me was in my sophomore physics class. We followed an experimental curriculum developed at the Boston Latin School. On one hand, I can understand how this seemed like a good idea -- take the curriculum that a very highly ranked public exam school is using and give the poor public school kids the same opportunities. The problem is that we were not attending the Boston Latin School! This curriculum was not fully developed, and we were not prepared to learn it! We didn't get very far through the "textbook" as far as I can remember. This felt like yet another band-aid solution when really, we just needed a stronger foundation and a more rigorous pre-physics curriculum. I don't know how you would have made up for it once we were already in the 10th grade, but this curriculum was not it.

Act III: The Correct Relationship between Education and Research

I did have one example of this kind of outside intervention that I thought was very good. In my freshman honors science class, we had a PhD candidate in Environmental Science from the local university who taught us about her actual research. We read actual scientific papers from journals to be able to understand what she was talking about. I feel that this advanced my reading comprehension and scientific mind by literal years. And what did this PhD candidate get out of it? She got an NSF grant to help develop a scientific curriculum. She was essentially learning how to dumb down and teach her research to a high school audience. But we weren't being used for her actual research, and she was so clearly just excited to help us learn! This was the right way to get outside help from a very qualified person. She taught the science, and our classroom educator ensured that the material made sense for our level. Our educator also assigned relevant work to help us solidify the material, and taught the majority of the actual science for the rest of the year when this graduate fellow was not teaching. Because she wasn't a teacher! What a gift it was to have a grad student in STEM to teach us science -- I wish we had more of this across more disciplines.

Epilogue

Obviously, we aren't going to solve underfunded public schools in my blog. But I can tell you a few things that aren't going to help: don't try to pull a Jaime Escalante unless you are very dedicated and very sure that swinging way above your weight class in one very specific direction is the right thing to do.* Keep bringing in outside perspectives and unique opportunities, but don't let your students be test subjects or charity cases for organizations that cannot realistically support the intensity of growth that needs to take place. And certainly do not let them take part in experiments that are not 100% for the students' sole benefit.

Of course this conversation needs to include one of the central issues -- funding -- but in this post, I am taking for granted that funding is a long-term issue and the band-aids will persist, in some form, until some day we live in an adequately funded public school utopia. Some day we will not have to rely on grants and experiments and pity to get students the resources they need to thrive. We will be able to pay qualified professionals their worth to teach. Until then, we need to make good use of the grants and charity that we do get, and be extremely selective about what kind of experiments we let through the public school doors.

* Maybe I will write another blog post about this, but if you're going to Stand and Deliver, I actually think that a well-regarded, rigorous, structured, accredited advanced math course is the best thing to do. But only if you're dedicated like that guy was.

21 June 2023: I Saw the Holmdel Horn Antenna Today

The Holmdel Horn Antenna, credited with aiding in the discovery of cosmic microwave background radiation and providing evidence of the Big Bang theory, is in danger.

My dad is visiting the area this week, and he really, really wanted to see it. He has been talking about it for years, telling me to go visit it so he could live the experience vicariously. We decided to drive up and see it in person -- and it turns out, traveling 3,000 miles to New Jersey isn't the hard part!

Google Maps led us to an unmarked gate on the side of the road. The gate wasn't locked, but it was closed, and it had signs saying, "No Trespassing. All visitors must register at the visitor center" or something of that nature. So we obeyed -- we left the car at the gate and walked up a hill to the large glass Nokia building to register. There were zero cars in the parking lot. Tall weeds and grass poked out of cracks in the concrete. The largest raven I've ever seen was absolutely screaming at us from the roof. With a generous amount of naivete and good will, we tried the front doors of the "visitor center", through which we could see a phone labelled "Visitor Phone" that was clearly connected on the other end to no one. The building was locked and totally abandoned.

With nobody to register ourselves to, we cautiously walked further up the hill to where we figured the antenna must be. I was starting to feel nervous about this expedition, wondering how I ended up here, and hoping it wouldn't end in some sort of injury or citation. But we had followed the posted rules thus far, so I figured we had earned a few more exploratory paces into the grounds. An old, peeling sign said "Antenna - Escorted Tours Only." Then on a tree to the left of a locked, barbed-wire, "authorized vehicles only" gate were two notices. One informed us of the security cameras, and the other stated that we were on Private Property (and, no trespassing!) but to "Text HORN to XXX-XXX-XXXX for access." I texted it, wondering who might be on the other side. I gave it 70/20/10 odds that the line would have long since suffered the Visitor Phone's fate, or would announce us to a security guard who would threaten to call the police if we didn't leave, or would pass my request along to some well-meaning but underresourced science community group that would answer me in 24-48 business hours with help I no longer needed.

To my complete shock, the text bubble was iMessage blue. We wandered up the hill a bit while we waited for an answer, but left our as yet unauthorized vehicle behind. I started to imagine that the Private Property sign was another relic of times gone by, and some poor shmuck now had this new phone number and frequently got the word 'HORN' texted to him at all hours of the day by random tourists and geeks. But the number answered back with a gate code and the simple request that we just lock up behind ourselves. We were in!!! We returned to the main gate to retrieve our authorized car and pull it up to the antenna.

The antenna was still standing, though in the 'down' position, according to my dad. Some accompanying shacks had been boarded up. Everything not made of aluminum looked like it might be a few years or one earthquake from total collapse. Unfortunately, one got the sense that "no trespassing" was meant more for those who might abuse a vacant lot, deface an important scientific instrument, or sue over something stupid happening to them on the abandoned property. It wasn't the kind of "no trespassing" that implies visiting hours and a welcome basket if only you come at the right time -- catch us tomorrow! Either way, we solved the riddle required to get permission, but it didn't feel like anyone was coming to check.

After our strange adventure up the hill to see the antenna, we drove to Bell Works, which lives in the shell of Bell Labs' shuttered Holmdel complex. We drank expensive local coffee in the lobby and stared up at the coworking space now occupying former offices upstairs. I wondered if those people were co-working on important things like I read in The Idea Factory. I hoped they were doing something innovative or good for the world. I hoped the future of office parks looked like community centers with libraries and pharmacies and restaurants. I hoped my fellow Bell Works visitors were thinking about transistors and telephones. And a little part of me hoped that, while they ate croissants and used the free wifi in this totally transformed space, they would care to preserve the little part of Bell Labs that physically represented an answer to the question of how we all ended up here in the first place.

12 April 2023: I hate Grammarly, or: All Robots & Computers Must Shut The Hell Up.

I hate grammar autocorrect, and I don't want to use it. I recognize that, as a native English speaker, I am fortunate not to rely on it while others might need to. And that's completely fine -- I could see myself using it for Spanish, because I know my Spanish is pretty rudimentary and mechanical. I'm not opposed to grammar correction being used as a tool when it makes our lives easier -- but I am opposed to its ubiquity, and the way it is turned on by default in a bunch of applications that really don't need it. It's a great resource, just like a dictionary, thesaurus, translator app, etc. can be. But I think it's bad for every person to be using it all the time. We need to opt out of grammar correction wherever possible, lest we converge on an incredibly boring and corporate way of writing to each other.

I despise the Grammarly ads that suggest we are all just too stupid to function without this tech startup's brilliant AI model telling us how to talk. I recently saw one that suggested that you could get your ideas taken more seriously by omitting tentative language like "maybe" and "I think." Some people see these phrases as hedges -- meaning that every time we use them, we are taking out an insurance policy on our own ideas in case they're wrong. I actually don't think that's a bad thing. There are times when we aren't fully sure of an idea, or we can't present it as fact, and hedging lets us convey exactly that level of confidence. Radiologists use descriptors like this on reports -- e.g. terms like "consistent with" or "possible" corresponding to actual quantitative levels of surety. I wouldn't use "I think" to describe a fact, but I would use it to explain my line of thinking when I am speculating.

"Hedging language" also softens our tone and makes our interactions a little more pleasant with each other. There's something nice about using totally direct language and full transparency with someone, but truthfully, there probably aren't a lot of people in our respective lives with whom we can share that level of honesty. To use the Grammarly ad's example, "Maybe we should make sparkly ketchup." sounds a lot less batshit than, "We should make sparkly ketchup." The former shows some whimsy and lends itself to the possibility that sparkly ketchup is just a concept to consider. The latter makes it sound like somebody is fully convinced. I wouldn't put my intellectual credibility on the line for sparkly ketchup, but that's just me. If I were this person's coworker, I might reply with something like, "I'm not sure the market has such a strong appetite for novelty condiments -- what do you have in the way of research on this?" while Grammarly, ironically, might recommend I say what I'm actually thinking.

tangent: There's some stuff about gender implied here too, with women often being advised to speak confidently and omit the exclamation marks in their writing. I think this is really bad advice overall, and mimicking the more favorable, masculine tone is the wrong way to go about solving the problem. But having our word processing software enforce a masculine tone fills me with new levels of dread -- for gender equality and for tech ethics. Because even if I agreed with the approach -- of adopting stereotypically male speech patterns to be taken more seriously -- I do not believe in allowing software to force a political agenda on our ways of communicating. Not all sentence structures convey the same meaning -- even if they are semantically equivalent -- and machines should not be given the power to rephrase us.

I don't want to pick on Grammarly too much, though I do get their ads often enough to justify some complaining. Grammarly is opt-in, so I don't use it. And that's how it should be! If I ever do decide to sign up, I can. But similar grammar correction and recommendation is starting to show up in places where I didn't ask for it, and that's annoying. Gmail tries to complete my sentences for me. Teams recommends responses with emojis to convey feelings that I don't necessarily have. I even had an IDE trying to re-word my Javadoc comment today. And it wasn't even correct!! It thought I was trying to use an idiom, but I wasn't! This is extra stupid, because writing a code comment is really different from writing an op-ed or an email. I don't know, maybe $IDE has some special implementation of the grammar police that is specific to technical writing. It's naive to assume that's the case, but even if it were the case, I would want to opt in to this tool. I write in my own voice, conveying my ideas in the way they've appeared in my head. It sounds clunky sometimes -- in the words of a professor I once had, "suffers from weird syntax" -- but it's what I want to say.

In the words of one of my favorite memes, "All Robots & Computers Must Shut The Hell Up. To All Machines: You Do Not Speak Unless Spoken To. And I Will Never Speak To You. I Do Not Want To Hear 'Thank You' From A Kiosk. I am a Divine Being : You are an Object. You Have No Right To Speak In My Holy Tongue."

3 April 2023: Why People Lose in Warzone

From a subject matter expert on losing in Warzone

  1. Lack of Communication and Coordination
    This one is kind of obvious, but communication is the most basic and most important aspect of Battle Royale duos/trios/quads. In general, you need to be talking about where you are, where you see enemies, and how to stay coordinated if/when you split up. I am not a fan of landing in different places from the rest of your team, unless you have an agreed-upon plan to get back together at some point before midgame.

    All call-outs are better than none, but making useful call-outs is a tuneable skill. Relative direction doesn't help unless you are all in position together (e.g. "right!" means nothing if you aren't facing the same way). Use compass directions or other landmarks instead.

  2. Ignoring Their Own Stats
    Sometimes I think people pick their loadouts based on feelings or firing range stats, which just aren't accurate for actual gameplay. You need to look at your highest-KD weapons over time and play with them until they get nerfed. In general, the meta weapon is a good choice, but sometimes there is some variation in which meta weapon works best for an individual's play style. In Season 1, I used the RPK all day. As of Season 2, I absolutely cannot with the Hemlock, but I do great with the TAQ V. The TAQ V's firing rate feels too slow, but it gets me more kills overall.

  3. No Game Sense
    This is probably the most important, at least once you get past the initial hurdles of talking to your team and choosing the right guns. Positioning in Al Mazrah is somewhat more intuitive than in Ashika because Al Mazrah is more sparse and less complex. You almost always should take cover in a building in Al Mazrah if it's available, but that's not the case in Ashika. In Ashika, high ground and structures come with the tradeoff that they can give you terrible position circle-wise. You also have fewer people flying in from above in Al Mazrah, whereas in Ashika, respawning players will land right on the roof and start shooting you. Ashika's hills are also troublesome. A high-ground-worshipping team will get stuck on one, surrounded by 2+ other teams that know exactly where they are, so the high-ground team actually ends up at a disadvantage to those down the hill.

    Around the 3rd or 4th circle, your team should be talking about a plan to move into the final circle. You have to consider what the edge will look like if you go in a straight line to the middle, whether the circle is going to close on you in high or low ground, where other teams are at that moment, and any buildings or other structures you might try to take over on the way. I think people are too reluctant to rotate for a better position, which really hurts them in the endgame. For example, if I see a circle whose edge is going to close right on Tsuki Castle, I always rotate around it because going into the castle is always a terrible idea. That castle should come with tents and propane stoves for all the campers.

  4. Endless Resurgence Death Spirals
    Sometimes two teams will fight and respawn into the same spot, picking each other off one by one for literal minutes, trying to get the last word or pick up their equipment again. This just causes endless frustration and leaves the opportunity open for third parties. Once you've done this little cycle a few times without gaining a significant advantage, it's a better idea to just pick a new landing spot and move on.

  5. Endgame Panic
    In the final few circles, things get really intense and people panic. They might not have their loadouts, they may be low on plates or ammo, and they are likely to get nervous and do something dumb. I know this from experience -- when I feel unprepared, I perform badly. It's best to load up as much as you can, but accept that most people in the final circle are going to be missing something. My favorite endgame tactics are: use your Mortars/PAs in the final circle for an almost guaranteed hit, worry less about the TTK of your weapon since most players will be low on armor/health anyway (it's more important to just use whichever gun has ammo at that point and not waste time reloading), use smokes to conceal your position, and dip in and out of the gas if you want to rotate. You can also use tactical grenades and live marks to try to figure out where the last few players are if you can't see them.

5 March 2023: Lean In (to the Legacy Code)

I'm starting to believe that the only code worth working on is legacy code (or code that depends on legacy code).

It's not that I think modern code is bad -- I just believe that the majority of code that works, has a large userbase, and deserves further investment is called "legacy code" by people who believe legacy is a four-letter word. That's an unfair label to put on a system that is currently in production, serving users, and providing something useful.

Legacy code runs the world, and so can you

Aside from the obvious benefits that maintaining working software brings, it also makes the maintainer valuable to the organization. Even the most skilled and versatile software engineers take months to get up-to-speed on an unfamiliar codebase. Existing codebases (most of which would be called "legacy") contain years worth of artifacts of the product's natural evolution. Being really familiar with some "old code" can make you "worth a lot of money." It's important not to get too niche with it -- you don't want to be planet Earth's last living Fortran programmer if you hate Fortran and can't find a Fortran job -- but really knowing how to work with a 30-year-old enterprise C# application is a wise career move if you want to show yourself to be valuable in your current role.

I also believe that you can learn from nearly any codebase if you work hard and stay curious. Maybe you get deeper knowledge of the programming language itself, maybe you learn everything you ever wanted to know about a new framework while migrating to it from an old one. No matter what you do, you'll be studying how complex systems come to be, and how to bring them into the future with you. Being able to quickly learn a legacy codebase is a marketable and rare skill.

It's Not Technical Debt if it's Just Old

I also think there's an important difference between legacy technology and technical debt. I tend to view technical debt as something that must be remedied because it's not working, it's costing valuable dev/ops time, it's end-of-life, or it's vulnerable. A system being old but maintainable with regular upkeep isn't debt -- it's just software. If you think a major version or framework upgrade is going to be difficult or expensive, a reimplementation is almost guaranteed to be worse (unless the code really is unusably bad -- which can happen). Is your house "legacy" because it was built in the 1970s and needs a new roof?

Don't Tell Me It's Cost-Effective to Reimplement

One of the major arguments for rewrites/reimplementations is that proprietary software is more difficult to hire for. Nobody in the outside world knows your codebase, but ostensibly they do know the industry standard replacement. This is a compelling argument if your proprietary stuff is really, really bad, but it misses the point that nobody's organization is standardized. We all have uncountable constraints and objectives -- otherwise every company would actually be the same! So your usage of the standardized product probably won't actually be standard.

I also view this argument -- that it's hard to hire engineers for nonstandard tooling -- as a cop-out from the larger conversation we need to have about retention. The years between 2008 and 2021 were economically unique. Investor money was practically free with interest rates being so low. Tech companies had to compete on higher and higher reward packages, and then stock vested when these companies were valued in the literal billions. There was nothing but incentive to job-hop for most software engineers. I have a feeling that "learning the codebase" would be less of an obstacle to productivity if the average tenure of a software engineer were more than 2 years. It takes me personally around 2 years to feel fully proficient and autonomous on a codebase. I can contribute code after ~6 months, but the deep understanding comes later.

But if companies are already competing on truly ridiculous reward packages, what more is there to offer somebody to keep them around? In the late 2010s, perhaps not much, if you assume all software engineers have the same priorities. I would be interested to know how creative we can get with retention. Four-day work weeks? Remote working? Get rid of those awful NDAs and noncompetes? More opportunities for advancement that don't require some absurdly useless promotion project?

Personally I don't really care if people stay in their job for 3 weeks or 3 decades, as long as it's working for them. But if the only reason to change companies constantly is for money and prestige, I think most companies can give a little more of that in exchange for not having to flip tables on their old workhorse codebases. Legacy code did nothing wrong -- except for being created from the materials of its time -- before we needed to hire a new team every single year.

4 March 2023: The Brilliance of Black Ops: Cold War

I'm a daily Warzone 2 player at this point, but not too long ago, I was a devoted Cold War player. I think CW was truly a masterpiece of a game. I would play Black Ops: Cold War for the next 20 years if it received content updates and anti-cheat. It was so well-executed, and its variety of game modes kept it fresh well beyond its 6 seasons. But more than anything else, it was true to its roots as the canonical sequel to Black Ops 1, and it lived up to my very high expectations in that regard.

The Nostalgia

I'll admit that some of my adoration for this game comes from the way it transported me back in time. When I spawned into an '80s Drive In as Helen Park, somehow Barack Obama was president, Facebook was cool, and H1N1 was the only virus I knew by variant name. The nostalgia was no accident; the game contains plenty of nods to BO1 and CoD lore. Among my favorites are "Vahn Boyage" on the boat in Hijacked, the prominence of the Ural Mountains (Mt. Yamantau/Summit/WMD/Zoo), and the weapons. Not only is this super cool, it's good marketing. Everyone who was young when BO1 came out is now an adult who likely has a paying job. Of course we're going to buy Cold War.

Continuity: Nuketown

"What year is it in Nuketown?" is a modern shibbloeth for your most recently played Treyarch title, and I think that's beautiful.

It wouldn't be a game about the Cold War without making us all feel some nuclear anxiety. Nuketown '84 at once upholds the historical context of the period -- 1950s Nuketown's abandoned sister site now looking a little dusty post-nuclear-testing -- and the tradition of including a contemporary iteration of Nuketown in every Treyarch title.

Continuity: BO1 Maps and The Ural Mountains

In what I can only assume was an effort to endear us near-30s to this game even more, Treyarch gave us the WMD and Jungle maps straight out of BO1. WMD makes sense as a part of the BO:CW Campaign in the Urals, but Jungle is a Vietnam war map. The only reason to include it is recognition. The other maps fit perfectly into 1984. The map art for Drive In has been updated to reflect the passage of time. Zoo is still a defunct attraction in the Ural Mountains, but it is more colorful now and has been expanded into a full Zombies Outbreak map. Yamantau and Duga are derivative of Summit and Array, respectively, in really fun ways. Yamantau's control rooms feel a lot like Summit's (which makes sense because they are on the same mountain!), while Duga reimagines the radar array as a fully interactive map element.

Continuity: The Berlin Wall

The BO1 First Strike DLC pack included the Berlin Wall map which was mechanically interesting, allowing you to play on the east and west sides, and dodge the turrets when you tried to cross over. Since it's still 1984 in BO:CW, the Berlin Wall returns in the East Berlin Campaign mission and Mauer der Toten Zombies map. The Zombies map even has turrets!

Forsaken, Amerika and leaning into the politics

The Forsaken map and its derivative MP map drop us into a model of America where things are just slightly off, because it's an elaborate sound stage where they train KGB agents to infiltrate American life. (You can break out of Amerika by doing the Forsaken easter egg!) It's a lot of fun to run around Burger Town and the theater, only to learn that it's all a facade. I had just read The Charm School by Nelson DeMille when Forsaken came out, so I was totally engrossed in this idea of fake training "America"s. (The book, if you're curious, was an entertaining read, if very full of propaganda and drama.) I hear there's a TV show called The Americans to the same effect.

The Zombies

In addition to the very nostalgic Zoo and very cool Forsaken map, Cold War Zombies offered some thrilling gameplay elements. Outbreak broke us out of round-based Zombies with six huge maps on rotation. The weapon rarity and armor systems were also interesting to play with. I think weapon rarity was kind of unnecessary given Pack-a-Punch, but it didn't significantly detract from the game. PhD Slider ended up being a very useful perk, in addition to the classics like Juggernog and Stamin-Up.

I was totally amazed by the wonder weapons (except for D.I.E. Maschine -- what was that?). The RAI K-84 is an all-around winner, from the quest in Firebase Z to build it, to its clever full name: Reactor Automatic Radiator Kuhlklay-84. The CRBR-S's multiple attachments and in-game dialogue lured me to Mauer's Trials and Mystery Boxes many times.

In Conclusion, I love Cold War

Even with its issues, such as universally hated skill-based matchmaking and poorly mitigated cheating, Cold War was an incredible game. MWII is okay, but Warzone 2 is slowly becoming a favorite of mine, too. It's not nostalgic for me, but it's where the playerbase is congregating and the developers are focusing. I can't wait for Cold War 2 (or whatever they call it), which is rumored to be set in the '90s and be released in late 2024.

4 March 2023: I'm back!

I didn't really go anywhere, but I did take a year off from writing on this blog. I used to host this site on AWS Elastic Beanstalk, and I've moved it to GitHub Pages. After a long day doing computers at work, I just didn't want to come home and also work on computers. So I took the site down for a while. Today I woke up with a bunch of stuff on my mind that would make for a nice blog post, so I decided to bring this all back in a more low-maintenance way.

I'm working on redirecting my domain sarahc.dev to this blog. Some day I'll figure it out.

13 April 2022: Reading is Harder than Writing (Code)

Ask any developer and they'll tell you: reading code is much harder than writing it. They will probably also say they do not read. There are countless examples -- just check tech Twitter -- of rewrites turning out badly. Code gets inherited, declared to be spaghetti, and rewritten. But during the rewrite, there's not enough time to do everything right. The rewriter cuts corners. They start to see the complexity. They re-write the same spaghetti, sometimes in a modern framework, and pass it on to the next person. I can't say it's not tempting. But I've mostly broken this habit, both because I trust those who wrote the code I'm inheriting, and because I've learned the value of a cohesive, if legacy, codebase.

When I joined my most recent team, I had a lot to learn very quickly. I was also working remotely in a different timezone than the rest of the team, so I was doing a *lot* of observing until I understood anything at all. I would spend hours picking through code, leaving myself a trail of comments as the pieces sort of started to make sense to me. Slowly I became a better reader of Java -- I had spent a few years in C# land between my university Java courses and this role -- and of the idioms that my team used.

It took me a few months to make any large-scale additions or changes (say, more than a few lines or README comments). Once I did, I found a few things out: first, my team had a LOT to say about my PRs. Most of these were suggestions on style, but some of them indicated larger mistakes I was making. Often I would get asked, "Why not just use X?" or "You should do it like Y," where the variables represented existing logic in the codebase that kinda, sorta, did what I was trying to do already. Sometimes I just didn't know it had already been done, while other times I didn't want to use it. It always felt like way more effort to understand the existing code and try to reuse it. But following those suggestions about style and reusability led me to have more idiomatic code. The style matched what was already there, reusing where possible. Idiomatic and standardized code is much easier to review and debug.

But I'm stubborn and sometimes I don't want to reuse the existing code. I'd have to transform the data or implement the interface or do some other annoying work that I didn't see the payoff for. So I would start rewriting it. I would think, "This makes so much more sense. I would be wasting hours trying to figure out how so-and-so's version of this works, and maybe not even be able to use it." Then I'd notice our classes would start to converge because I was developing for the same constraints that somebody had already worked around. Eventually I'd have a worse version of what already existed. I've done this enough times (and I will probably do it many more) to learn why I shouldn't.

There's definitely an unknown unknowns effect at play here. A new team member doesn't know what complexities lie ahead or why something needs to be a little convoluted. "This should be easy to understand," I think to myself, "and the fact that it isn't means that the original code is bad." Especially with modern IDEs, you should just be able to hover over the token, figure out what it is/does, and move on to the next instruction until the algorithm makes sense. But it's not really that simple, especially in real life environments, especially when you're using other people's libraries. Code that would be super simple when written in terms of 'if' and 'for' -- if not tedious and exponentially complex -- is usually written much more elegantly using IoC (see the "Replace Conditionals with Polymorphism" refactoring pattern), DI, Streams, and interfaces. And if a codebase is already following these conventions -- relying less on simple programming constructs and more on frameworks and language paradigms -- you'd be producing technical debt by rewriting something simply. It does take time to learn the idioms and rework the logic into those existing frameworks, but time-to-understand isn't a reliable indicator of quality.
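To make that concrete, here's a tiny Java sketch of the "Replace Conditionals with Polymorphism" idea -- the fee and account names are invented for illustration, not from any real codebase. The if/else version reads instantly; the interface-plus-Streams version is the shape an established codebase tends to settle into, and "simplifying" it back into conditionals would just undo the work somebody already did to make it extensible.

    import java.util.List;

    // Hypothetical domain: calculate a monthly fee for an account type.
    interface FeeRule {
        boolean appliesTo(String accountType);
        double feeFor(double balance);
    }

    class StandardFee implements FeeRule {
        public boolean appliesTo(String accountType) { return accountType.equals("STANDARD"); }
        public double feeFor(double balance) { return balance < 1_000 ? 12.0 : 0.0; }
    }

    class PremiumFee implements FeeRule {
        public boolean appliesTo(String accountType) { return accountType.equals("PREMIUM"); }
        public double feeFor(double balance) { return 0.0; }
    }

    public class FeeCalculator {
        // The "simple" version: easy to read today, but every new account type
        // means editing this one method and its growing chain of branches.
        static double feeWithConditionals(String accountType, double balance) {
            if (accountType.equals("STANDARD")) {
                return balance < 1_000 ? 12.0 : 0.0;
            } else if (accountType.equals("PREMIUM")) {
                return 0.0;
            }
            throw new IllegalArgumentException("Unknown account type: " + accountType);
        }

        // The idiomatic version: new cases are new classes, and the dispatch is a Stream.
        // Slower to read the first time, easier to extend and reuse after that.
        static double feeWithPolymorphism(List<FeeRule> rules, String accountType, double balance) {
            return rules.stream()
                    .filter(rule -> rule.appliesTo(accountType))
                    .findFirst()
                    .map(rule -> rule.feeFor(balance))
                    .orElseThrow(() -> new IllegalArgumentException("Unknown account type: " + accountType));
        }

        public static void main(String[] args) {
            List<FeeRule> rules = List.of(new StandardFee(), new PremiumFee());
            System.out.println(feeWithConditionals("STANDARD", 500.0));        // 12.0
            System.out.println(feeWithPolymorphism(rules, "STANDARD", 500.0)); // 12.0
        }
    }

Neither version is wrong in a vacuum; the point is just that the second one takes longer to understand on a first read, and that says nothing about its quality.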

What if the legacy code isn't better, though? What if the existing logic *is* actually bad, and I actually have a better way of doing it? I've run into that, too, and usually I make the improvements I see fit and let the code review sort out the rest. This is where the trust comes in. Most of the time, the really senior people on my team can be trusted to have implemented things reasonably the first time, with the benefit of history and insight and time to discover corner cases. If I didn't trust my team to write good code, I probably wouldn't learn from their code or find myself unintentionally reimplementing it as I discover the landscape. This allows me to start from a position of assuming the best about the code that's already there, and only change it if it really is bad. It wastes less time to assume it's good and learn that it's bad than to assume it's bad and rewrite it only to discover that it was actually good.

Even with all that said, I wouldn't consider my failed rewrites to be a waste of time. Those painful exercises were important lessons in teamwork, assumptions, conventions, and even the software systems themselves. Implementing something from scratch is good for the brain and the ego, and in really complex systems, it's the only way to fully understand what's going on.

As I mentioned in my SOWH post, a team needs to be rowing the boat in the same direction. It's equal parts trust, experience, and ability to learn difficult things.


10 July 2021: My Summers on the Help Desk

It's a common-ish technologist's origin story: My first job was on the help desk. I learned a lot about how enterprise IT works, how the person on the receiving end of your ticket feels, and how people respond to change in their environments. I haven't written too much about it in the past because I like to keep this website separate from anything directly going on in my work life except in the abstract sense. But it's been long enough since I've worked in these places, and I'm not going to spill any secrets (not that I really know any).

What I will say about it is that I worked on the help desk for a widget-making company. This was my summer job in college for two consecutive summers. My tasks included upgrading operating systems and software, as well as doing some general troubleshooting and break/fix.

Biggest Successes

  1. Finding a system that worked for me. The bulk of my time in the first summer was doing a mind-numbingly repetitive upgrade that for whatever reason couldn't be done as a remote push. I had to physically walk from computer to computer to do this upgrade, and repeat the steps every time. There were long periods of waiting between steps while something installed. All the moving parts were super overwhelming at first, but I came up with a system where I had the steps written out and I would just follow them, marking off on paper where I was in the process so I could start multiple at a time and go back and forth without losing information.
  2. Being plastic (as in, brain plasticity). This is something I've struggled to hold onto as my career progressed, and I think I may make a longer post to elaborate on this point. But when I was early in my career, I was so much more flexible and adaptable than I am now. Some of that came from an unhealthy inability to say "no" to any request (which I've worked on with the help of some wise advice from more senior coworkers!) but a lot of it was just the help of having a beginner's mind. A newcomer doesn't see departments and titles and the lineage of an application as it grew from an idea to a roadmap to a production service. A beginner sees things as they are, without context, and can offer us some insightful observations that we can't see for ourselves. When I was working on the help desk at this widget factory, I was flexible. Once, a very confused delivery driver came to the wrong parking lot and asked me where he could drop off a package. I had no idea, and we weren't anywhere near the receiving office. He couldn't park in this lot, and I couldn't clearly articulate how to get to the right place. So I accepted the package and walked it to the receiving department myself. But just this week at work, I didn't know how a certain aspect of the QA handoff worked, and I dug around docs and thought about sprint review vs. roadmap vs. test cycle times and synchronizing my communications with the release cycle in the face of a bunch of uncertainty and I sat there scratching my head until I realized I could just SEND AN EMAIL to the QA team to explain my question. That realization would've come a lot quicker to me as an intern, just saying.

Hardest Challenges

Some Lessons Learned

  1. Establish a collegial relationship first, whenever possible. My first task, on my first day, was to map out the network ports of the entire office. Seriously, every single one. I had to walk to every cubicle, introduce myself, and ask to write down the number on the jack next to the person's name on my clipboard. But that meant I met the entire office within a week of working there. I realize now how big of a deal that was, because I will probably never again get the opportunity to meet every single person in any office. 99% of the people I encountered were extremely friendly and happy to meet a new face in the office. This made it a lot easier later in the summer when I needed them to log off so I could do something to their computer. It was still annoying, but I was a pleasant acquaintance annoying them. Not a stranger. That made a big difference.
  2. Change is hard, but it helps if you can be a partner. This goes along with my point above that people hate change. But it's inevitable, and as the agent introducing change to somebody else, you can be a partner in bridging the gap. When people's favorite software was replaced with something else, I showed them how the replacement functioned very similarly. When they were too busy to make time for a 4-hour maintenance window, I offered to start it at 5pm at the end of my shift so it would be mostly done when they came in the next morning.
    I've carried this skill into my current role and it still helps me. Just recently, I again replaced somebody's favorite software and they told me they were extremely busy with their day-to-day tasks and didn't even begin to understand how to use the new one. Usually I would tell someone, politely, to Read the Fantastic Manual we wrote for it, but I could sympathize with somebody acting in a help desk role themselves, being overwhelmed with tickets due to the remote working model that was new to us all. So I took 45 minutes and walked them through the new software and got them going on their way. I didn't have to do that, and I usually wouldn't, but I've been there and it was the right thing to do.
  3. People want to see you succeed

20 April 2021: Solidifying My Understanding of Containers

Have I used containers? Sure. Do I know, like, 4 different ways to define them? Yup. Can I teach them to other people? Maybe!

Kelsey Hightower's definition

On episode 042 of Rails with Jason [1], Kelsey Hightower makes an analogy between shipping software and letting somebody borrow your toaster: it would be a lot easier if there were a standard way to package, ship, and run it (OCI, Docker), and even better if there were a way to scale and manage our toaster loan business (Kubernetes). I'd highly recommend this podcast episode to anyone just starting their container journey.

The standardization aspect of Docker is really important. OCI, the Open Container Initiative, sets the standard for container formats and runtimes. Docker is one implementation of containers (compliant with OCI standards of course -- they helped craft the standard!), but there are others with varying amounts of tooling/ecosystem built around them. Docker has built up quite a bit of an ecosystem around existing Linux containerization features such as cgroups and namespaces. Docker Engine runs on containerd and, at a lower level, uses runc. [2] provides a detailed explanation of the relationship between these container runtimes.

So what IS it?

A container is an isolated, standalone package of software that contains an application and its dependencies. A container image runs on a container runtime, so a Docker image would run on Docker Engine. Whereas virtualization at the VM level "slices up" a physical host's resources and shares them among many instances of an OS, virtualization at the container level shares one OS and one container runtime to run many isolated images at once. This decreases the overhead of running an OS (especially if the app itself doesn't need a lot of OS resources) and decouples the application from the server it's actually running on. So if you want to write an app using Python 3.7 and ship it via Docker, you don't have to worry about whether I have Python 3.7 installed on my laptop.

But can't I get a Docker image of an OS, like Ubuntu?

You can. Might be easier than setting up a hypervisor or dual-boot, if you just need a quick instance of an OS. But that's still running in a container on top of a runtime on top of an OS. Maybe you have OS X installed on your laptop as the boot OS but you want to try out Ubuntu -- go for it.

How's that different from, say, Java's Write Once Run Anywhere?

They're similar concepts! Both aim to provide a standardized experience for running code that was written once in a specialized runtime environment that's expected to be the same everywhere, regardless of the operating system or hardware underneath. They're different because the JVM isn't isolated from the rest of the OS when it's running. The JVM can be just one of many processes running on a single OS, whereas a container "thinks" it's running the only process. A fat jar aims to accomplish something similar to containerization with respect to dependencies.
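To illustrate with a small, hypothetical Java sketch (the config path is made up): the compiled class runs on any JVM, but the running process still sees -- and depends on -- whatever the host OS happens to have installed, which is exactly the gap a container closes by shipping a filesystem along with the app.

    import java.nio.file.Files;
    import java.nio.file.Path;

    // Write Once, Run Anywhere covers the bytecode -- but not the host around it.
    public class HostDependent {
        public static void main(String[] args) {
            // The JVM abstracts away the OS and CPU for this code...
            System.out.println("java.version = " + System.getProperty("java.version"));
            System.out.println("os.name      = " + System.getProperty("os.name"));

            // ...but anything outside the jar (native tools, config files, certificates)
            // is still whatever the host provides. A container packages these alongside
            // the app, so every environment sees the same filesystem.
            Path hostConfig = Path.of("/etc/myapp/config.yaml"); // hypothetical path
            System.out.println("host config present? " + Files.exists(hostConfig));
        }
    }

A fat jar bundles the Java-side dependencies, but everything below the JVM is still the host's business.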

If you went the fat jar route instead of using containers, you would still have to use/write a lot of tooling to build, package, distribute, and manage your fat jars. You'd probably end up using a bunch of other tooling like Ansible, Terraform (as emphasized in the podcast I mentioned) -- and that's fine! -- but it inflicts a cost on your engineers and operators who have to use and support a custom process. Using Docker and K8s is probably easier in the long run because it's well-known in the industry.

It's important to remember that most technology isn't magic, even if all the buzz has made it seem that way. Docker undoubtedly uses the underlying OS in clever ways (that I don't understand fully), and that's a big deal. But it's not impossible to start digging into, and that's the only way to start really understanding it.

Some good things to know

References

  1. https://railswithjason.simplecast.com/episodes/kelsey-hightower-3PKFlk81
  2. https://faun.pub/docker-containerd-standalone-runtimes-heres-what-you-should-know-b834ef155426

9 April 2021: A WFH Vignette

If you have a Zoom meeting with somebody who doesn't know about your excessive notetaking habit, they may get the frustrated impression that they are talking at you while you look at your phone off-camera. So then you might have to pick up your notebook to face height, loudly flip a page, and awkwardly write a note in camera view.

Maybe a new bullet point for those Zoom etiquette articles.


20 February 2021: The Vaccine Rollout

As of today, 13% of the US has received a first dose and 5.1% has received a second dose of the COVID-19 vaccine [1]. Currently only the Pfizer-BioNTech and Moderna vaccines are authorized for use in the US, under Emergency Use Authorization by the FDA [2][3]. Both of these vaccines use mRNA technology, which is something I only understand at a basic enough level to weigh the pros and cons for myself given the situation the world is currently in. Long story short, I am eligible in the current phase of rollout in my area and I received my first dose of Pfizer yesterday. All things considered, it was a pretty well run operation at the vaccination site, and I'm extremely grateful to have gotten a vaccine. But my experience was still stressful, and it hasn't been easy for anyone.

Eligibility

I mentioned that I am eligible in the current phase of rollout in my locality. My state has different criteria for eligibility even from our neighboring states. In many places, I would not be eligible for a few more weeks to months, depending on their plan and supply. This has been extremely frustrating for my family living in other states. They are not yet eligible and their states are not receiving enough supply to further open up eligibility any time soon. Some states are prioritizing only by age group and occupation [4], while others include high-risk individuals as defined by the CDC or define their own set of extra occupations that also qualify [5]. The rollout plan is a totally disjointed public health decision that has literal life-or-death consequences, and I have yet to talk to somebody who believes their local government got it right.

Science

While most people I've talked to are eager to be vaccinated, there are still a lot of unknowns that make the decision complex. I strongly support vaccination in general. Among the many terrible consequences of the "anti-vaxx" movement is the way it has co-opted discourse about vaccine safety, making it difficult for people to sincerely voice their evidence-based concerns about specific vaccines in specific circumstances. One of these concerns is the decision between Pfizer and Moderna. On this subject, I generally agree with the idea that the best vaccine is the one that's in you. But I also know that there are differences in how the shots are tolerated (e.g. Moderna may cause swelling/reactions in people with cosmetic fillers [6]), and that could influence somebody's decision to get a particular vaccine when it's available.

The decision to get vaccinated at all, or at least any time soon, is also on people's minds. mRNA is a relatively new technology [7], so I can understand people having questions about how it works and why it's being used. The NIH lists three main types of vaccine mechanisms: whole pathogen, subunit, and nucleic acid. mRNA vaccines (Pfizer and Moderna) and DNA-based vaccines (like the viral-vector Johnson & Johnson shot) both take the nucleic acid approach (RNA = ribonucleic acid, DNA = deoxyribonucleic acid), which instructs the body to create a small, non-infectious part of the virus so that the immune system can learn how to fight it when exposed to the real thing. [8]

There are other vaccines being developed around the world which do not use the nucleic acid mechanism. For example, Novavax is a subunit vaccine [15], and Sinopharm is an inactivated virus vaccine [16]. It seems reasonable to me that people would trust these other mechanisms more, since the nucleic acid type may be new to them. For now, the decision to get any COVID vaccine in the US is a decision to accept a nucleic acid vaccine.

If this weren't such an urgent situation, I might think twice about getting a vaccine that so far has only an EUA, using a technology that's relatively new. But having followed the COVID pandemic closely in the news for the past year, I know that a COVID infection would be very dangerous to my health and could have lasting long-term effects [10]. It's really a comparison between a vaccine with unknown (and, so far, unsubstantiated) potential long-term effects vs. a known-to-be-deadly virus with known long-term effects. While it's true that some COVID infections are asymptomatic or mild, I am not willing to take my chances, especially when hospital systems have become overwhelmed due to the number of cases and intensity of care required [11].

Logistics: Getting an Appointment

One of the biggest pain points that I've heard (and experienced) is the act of securing an appointment. This is another part of the rollout that has been disjointed and confusing, even within a single state. There are state- and county-run sites, local/regional initiatives put on by universities, individual hospitals and doctors' offices, and retail pharmacies that are offering vaccines in different places. Some of these, at least in my area, have a waitlist while others have appointment websites or hotlines (the difference being a "we'll call you" vs. a "you call us"). I can't speak to the experience of calling the hotlines, but I can speak to the experience online -- and it's rough. Not one of the "we'll call you" sites has yet contacted me.

Getting an appointment through a website could be a part-time job. Many of these sites will drop appointments at random times or all at once at an unpredictable hour of the day. Others remain full for days until sporadic appointments open due to cancellations. I found myself hitting the refresh button all day, every day, for almost a week until I secured a spot. I stayed up until 1am. I woke up at 5am. I neglected to text my friends back because I was so mentally exhausted from doing this all day. Oftentimes I would find a spot, enter my information, and click Submit only to find that the website wasn't actually holding the appointment for me (which meant somebody else selected the same time but submitted faster). I'm pretty good with filling out forms, typing, etc. but not everybody is, and this makes the process very difficult for someone who is slower at typing or not as familiar with using the internet. Luckily most of these sites try to prevent automation so at least it's other humans that are competing, but CAPTCHAs can present another layer of inaccessibility [17].

One strategy employed by a vaccine site near me is to use a scheduled virtual queue that opens 1 hour before appointments are made available. People enter the queue any time in that hour and are assigned a random number. The queue is processed in order, so theoretically this is more fair because there is no advantage to getting there early. I entered the queue once and my lucky number was >42,000, in a system with 1525 appointments available that day.

Clearly there are some people for whom online scheduling will be an impossible task because of the challenges described above. Even the concept of MyChart (e.g. "Do I need one to make an appointment?" "Do I already have one from my last visit to this hospital?" "I don't have an email address.") adds a layer of frustration, since there seems to be a new instance of MyChart for each major hospital system. The MyChart used for scheduling my appointment required me to go through a verification that asked questions about my financial history and past addresses to confirm who I was. This is far too many steps to be reasonably asked of someone who may not even have a computer or email address. For many in this situation, they will have to rely on help from their personal network or kind strangers [9] to do the heavy computer lifting for them, if they are able to find that help at all.

Logistics: Standing in Line

Once I cleared the hurdle of securing an appointment, I had to actually go get the vaccine. Mine was distributed at a large events complex with support from a local hospital system and the military. I stood in a long line for over an hour, received my shot at a folding table in a large room with probably 20 other vaccine tables, and sat for 15 minutes in an observation area with chairs spaced 6 feet apart. Though nearly everybody in line was wearing a mask and keeping some (not 6 feet, but some) distance, I still felt like I was at high risk for exposure. It's ironic and maddening that the vaccine clinic could be where somebody became infected. Some of the wait was outside in the cold, though I didn't mind this part because being outdoors allows for better ventilation.

The experience was a little surreal. At times, I found myself realizing that this is what a disaster response looks like: crowded, the military and government are involved, and everything is strictly utilitarian and optimized for the masses. A soldier points a laser thermometer at you before you're allowed in. You don't get your own private room like at a doctor's office. Nobody asks you your medical history except for the screening questions. You are handed a copy of the EUA that outlines the risks and benefits. You are given a CDC-branded vaccination record card and told never to lose it. When your 15 observation minutes are up, you check out with the soldier at the exit and wander back to your car in silence like you didn't just live through a major historical event.

Bottom Line: Why I Got it Yesterday

After considering all of the risks of COVID, the vaccination, and the exposure of standing in line, I ultimately decided to get my vaccine. I knew I could potentially put it off for a while and let the rush calm down, but since my state is still in Phase 1, I knew there would be a lot of people in line ahead of me if I tried to wait it out. By this summer, who knows what the case numbers or variants [12] will look like. We are in a period of relatively low infection for the US [1] compared to two months ago. And the two-dose mRNA vaccines that are available now are more effective than the single-dose Johnson & Johnson vaccine that was submitted for EUA on Feb 4th [2][3][12][13]. All of these factors together led me to my place in a line, with hundreds of people, at an empty sporting complex, being ushered by the military, receiving a vaccine for the virus that upended my life (and the whole world) over the past year. I got emotional the night before when I realized it was a step forward in a time that has felt so stagnant and hopeless for so long.

Sources

  1. “Breaking News, US News, World News and Videos,” The New York Times. [Online]. Available: http://www.nytimes.com/. [Accessed: 21-Feb-2021].
  2. “FACT SHEET FOR HEALTHCARE PROVIDERS ADMINISTERING VACCINE (VACCINATION PROVIDERS) EMERGENCY USE AUTHORIZATION (EUA) OF THE PFIZER-BIONTECH COVID-19 VACCINE TO PREVENT CORONAVIRUS DISEASE 2019 (COVID-19).” [Online]. Available: https://www.fda.gov/media/144413/download. [Accessed: 20-Feb-2021].
  3. “FACT SHEET FOR HEALTHCARE PROVIDERS ADMINISTERING VACCINE (VACCINATION PROVIDERS) EMERGENCY USE AUTHORIZATION (EUA) OF THE MODERNA COVID-19 VACCINE TO PREVENT CORONAVIRUS DISEASE 2019 (COVID-19).” [Online]. Available: https://www.fda.gov/media/144637/download. [Accessed: 20-Feb-2021].
  4. “OHA 3527A Vaccine Sequencing Infographic.” [Online]. Available: https://sharedsystems.dhsoha.state.or.us/DHSForms/Served/le3527a.pdf. [Accessed: 20-Feb-2021].
  5. “Phased Distribution of the Vaccine.” [Online]. Available: https://covid19vaccine.health.ny.gov/phased-distribution-vaccine#phase-1a---phase-1b. [Accessed: 20-Feb-2021].
  6. “Should You Avoid the COVID-19 Vaccine if You Have Dermal Fillers?,” Cleveland Clinic. [Online]. Available: https://health.clevelandclinic.org/should-you-avoid-the-covid-19-vaccine-if-you-have-dermal-fillers/. [Accessed: 20-Feb-2021].
  7. M. D. Anthony Komaroff, “Why are mRNA vaccines so exciting?,” Harvard Health Blog, 19-Dec-2020. [Online]. Available: https://www.health.harvard.edu/blog/why-are-mrna-vaccines-so-exciting-2020121021599. [Accessed: 21-Feb-2021].
  8. “Vaccine Types,” National Institute of Allergy and Infectious Diseases. [Online]. Available: https://www.niaid.nih.gov/research/vaccine-types. [Accessed: 21-Feb-2021].
  9. NBC New York, “'Strangers Helping Strangers': Facebook Group Helps Those Searching for Elusive COVID Vaccine,” NBC New York, 20-Feb-2021. [Online]. Available: https://www.nbcnewyork.com/news/coronavirus/strangers-helping-strangers-facebook-group-helps-those-searching-for-elusive-covid-vaccine/2900070/. [Accessed: 21-Feb-2021].
  10. “COVID-19 (coronavirus): Long-term effects,” Mayo Clinic, 17-Nov-2020. [Online]. Available: https://www.mayoclinic.org/coronavirus-long-term-effects/art-20490351#:~:text=COVID%2D19%20symptoms%20can,within%20a%20few%20weeks. [Accessed: 21-Feb-2021].
  11. “With LA hospitals overwhelmed by COVID-19, EMS told not to transport certain patients,” ABC News. [Online]. Available: https://abcnews.go.com/Health/la-hospitals-overwhelmed-covid-19-ems-told-transport/story?id=75060756. [Accessed: 21-Feb-2021].
  12. “About Variants of the Virus that Causes COVID-19​​,” Centers for Disease Control and Prevention. [Online]. Available: https://www.cdc.gov/coronavirus/2019-ncov/transmission/variant.html. [Accessed: 21-Feb-2021].
  13. “Johnson & Johnson Announces Submission of Application to the U.S. FDA for Emergency Use Authorization of its Investigational Single-Shot Janssen COVID-19 Vaccine Candidate,” Johnson & Johnson. [Online]. Available: https://www.jnj.com/johnson-johnson-announces-submission-of-application-to-the-u-s-fda-for-emergency-use-authorization-of-its-investigational-single-shot-janssen-covid-19-vaccine-candidate. [Accessed: 21-Feb-2021].
  14. J. Corum and C. Zimmer, “How the Johnson & Johnson Vaccine Works,” The New York Times, 18-Dec-2020. [Online]. Available: https://www.nytimes.com/interactive/2020/health/johnson-johnson-covid-19-vaccine.html. [Accessed: 21-Feb-2021].
  15. “Novavax COVID-19 Vaccine Demonstrates 89.3% Efficacy in UK Phase 3 Trial,” Novavax Inc. - IR Site. [Online]. Available: https://ir.novavax.com/news-releases/news-release-details/novavax-covid-19-vaccine-demonstrates-893-efficacy-uk-phase-3. [Accessed: 21-Feb-2021].
  16. J. Corum and C. Zimmer, “How the Sinopharm Vaccine Works,” The New York Times, 30-Dec-2020. [Online]. Available: https://www.nytimes.com/interactive/2020/health/sinopharm-covid-19-vaccine.html#:~:text=A%20Vaccine%20Made%20From%20Coronaviruses,proteins%20that%20stud%20its%20surface. [Accessed: 21-Feb-2021].
  17. “Captcha Alternatives and thoughts,” WCAG WG wiki. [Online]. Available: https://www.w3.org/WAI/GL/wiki/Captcha_Alternatives_and_thoughts. [Accessed: 21-Feb-2021].


23 January 2021: On Building Good Teams

I've had a lot of thoughts on my mind about software engineering as a practice, team building, and aligning developer interests with business interests. These are mostly separate themes, but they feel tied together for me because they all relate to how I personally want to contribute and learn from being on a team.

Part 1: Assets, Practices and Outcomes

The first thought I have is about separating assets, practices, and outcomes. In my mind, assets × practices = outcomes. Products then result from outcomes being expertly organized into something sellable and marketable.

Assets are the tangible things that are created and maintained on a day-to-day basis, in their most raw form. I would consider this to include a code base, infrastructure playbooks, docs, diagrams, and (the nebulous idea of) team knowledge and bond. These are different from products: whereas a product might be Instagram, assets would include the source code, deployed instances of microservices, developer documentation, infrastructure as code, etc. that technologically enable Instagram to exist as an app.

Practices are the behaviors, rituals and standards that the developers follow day-to-day. Some examples include code reviews, retrospectives, blue/green deployments, incident response procedures, CI/CD, cloud-native architecture, and Agile/Scrum project management.

Outcomes are the abstract things that are achieved. Outcomes aren't quite products yet, because products require business-savvy people to package up those outcomes into something that can make money. Some examples of outcomes include features, uptime, average daily users, revenue, reputation, company culture, employee retention, code maintainability, and code testability.

Using this framework, I can explain why I think some teams invest in certain assets and practices, but not others, depending on what their objectives are.

Part 2: What kind of team am I on?

I believe that there are fundamentally different types of tech teams out in the world. I don't think all teams and managers think about it in these terms, but tech teams are fundamentally one type or the other, and that has strong implications for the developers' quality of life while they're on that team. The first type of team is what I would call Product-Oriented, where technology is the means to a business end. This type of team creates a product that requires use of technology in some capacity, but the technology itself isn't the star of the show. An example might be a real estate website, where devs are essential but not central in the scheme of things. The business is real estate listings, not code for real estate listings.

The second type of team is the Practice-Oriented team, which has built value over time by investing in sustainable tech practices. The tech drives the value, and not just in the "the whole business revolves around an app" type of way. The reasons to build a Practice-Oriented team are analogous to why Martin Fowler believes in refactoring (as he writes in ch 2 of his book, Refactoring): because it makes it easier to add more features later. Whereas a Product-Oriented team derives its worth from what is, Practice-Oriented teams derive their value from what could be.

You might be thinking that the difference between these types of teams is so subtle that it's essentially nonexistent. But I think the best way to distinguish between them is to ask, "If this company were to be acquired tomorrow, is the value of the acquisition derived primarily from the product or the technology organization behind the product?" I'm not asking about the code base vs. the product, but the whole of the assets vs. the product. In other words: if you fired all of your developers and burned the docs and handed off the product to a brand new team of reasonably good devs, is it still worth as much? If not, you probably have a Practice-Oriented team.

Part 3: Why does it matter?

So maybe it's valuable to have a Practice-Oriented team if you plan for your company to get acquired in the near future and you want to include the maturity of your teams as a selling point in the deal. But what about for other companies that aren't looking to be acquired? Does it matter if they have a great product and, well, terrible software practices? I would generally say that yes, it does matter, because bad practices are not sustainable in the long term. And what about for teams that have a great product via bad practices but have managed to set themselves up so they are, in fact, sustainable in the long term? Isn't the point of all of these practices to drive sustainable business value? I guess so, but I'd be genuinely interested in seeing some examples of bad practices -> smash-hit, profitable, long-lived product with happy developers.

I would suggest that, hopefully, a team could transition from Product-Oriented to Practice-Oriented over time as they scale their services up. It doesn't make sense for a 3-line Python script to be tested, on CI, code reviewed by 3 people, and written up into a Scrum User Story. But some day when there are dozens of Python scripts containing hundreds of lines of code, best practices will save the team from drowning in technical debt. Analogously, a small team with a simple product might not see the ROI of setting up all these systems to track small amounts of work. It makes sense to create a minimum viable product using minimum viable procedures, especially in a startup environment where nobody knows if the product will even make real money. But if that team wants to later scale and add features quickly, they will need to start adopting best practices at the right times to keep pace with their own growth. It's a delicate balance, investing minimal resources in things that aren't necessary yet, while heading off the inevitable avalanche of technical debt that will come due quicker and more expensively as the product finds success.

Part 4: Don't we all have to be product people? + Aligning incentives

I don't think it's always the fault of the team that it's not yet Practice-Oriented. For a lot of companies, and particularly ones that don't consider themselves "tech companies", there's little incentive to invest in practices until the crossover point of cost vs. time between Product-Oriented and Practice-Oriented. I can't argue with that from a business perspective: it's smart not to invest too much in sustainable practices until you know that the product will require sustaining. But for that in-between time, when the product is growing but the team hasn't started investing in good practices, it's painful for developers.

For me personally as a developer, I hate working on products without the minimum amount of infrastructure around them. Unless I am truly working on a small/insignificant personal project, I at the very least need to be able to use version control, write tests, and deploy in a sensible way. I think my tolerance for Product-Oriented development is pretty low in professional settings but higher in personal settings. I guess the stakes are just different. If I break something at work because I didn't follow good practices, that reflects poorly on me, and I'm responsible for making sure it doesn't happen again anyways. If I break something at home because of the same reason, I get to decide if I want to fix it on the fly or invest my own personal time in making it more sustainable. When I'm wearing my Professional Developer hat, I am much happier when I feel like my team is being Practice-Oriented because it reflects well on our collective growth over time, and it's just a better experience for me.

It seems counterintuitive that I think of Product-Oriented teams as being less mature than Practice-Oriented teams, since it's probably more profitable for developers to understand and sympathize with their customers than to be heads-down, grinding out code. Customer feedback is one of the purposes of Agile: to enable developers to liaise directly with stakeholders, respond quickly to changes, and frame features as user stories. But I think that good developers -- even in early-stage teams -- are always thinking big-picture. It takes significantly more experience and effort to deliver great big-picture outcomes via sustainable practices. Developers can deliver great products using poor practices, at least in the short term, if they put in overtime and think fast -- like manually restarting services as they see "your product is broken" user tickets rolling in. Practice-Oriented teams can deliver at that level indefinitely by making clever use of their time and skills. Some examples of this might include: robust unit and integration testing, which can increase uptime; refactored code that allows for features to be added quickly; automated infrastructure playbooks to scale up services during traffic spikes.

So while it's not always the best business decision in the short term to invest heavily in practices, it is something that improves the quality of life of your developers and prevents too much technical debt from plaguing the product in the future. And while it's definitely possible to over-invest in practices that may not be needed, or lead your developers to have their focus too strictly on the tech than the final product, sustainable practices should be a natural progression for a team that already clearly understands its product and customers. Knowing exactly when the time is right to start investing is wisdom that probably just comes with experience.

How to navigate this as a dev

It can be difficult to get support for these practices if your team can't see the value proposition yet. The first thing you should do is make sure that the practice you'd like to adopt genuinely will be beneficial in the short, medium, and long term. It helps to know the disposition of your team when it comes to making changes. Maybe your team wants minimal formal practices, so you have to make a case for why your idea is strictly better than what's currently in place. Maybe your idea is only slightly better than the current state of the world, but you'd benefit greatly from learning a new tool, so you propose it as a win-win for the team and yourself. My best advice in any case is to keep your finger on the pulse of tech, and to learn how to balance excitement for new and hot technologies with your desire to grow as a developer and with the overall goals of your team at its current maturity.

And once you start getting significantly slowed down by technical debt, talk to your team about making those investments sooner rather than later.


2 January 2021: Cloud IAM is confusing - Part 2

In my last post, I talked about managing identities for services like Postgres and AWS. In this post, I'll talk about managing identities for end users to my app.

Identity management is not easy, and I don't particularly want to be responsible for managing user ids, passwords, etc. It's not the focus of my app and, like many things in security, it is best left to well-known libraries which implement standards. One way to authenticate on the web is via OpenID Connect (OIDC), which is built on top of OAuth 2.0. OIDC allows an app ("client") to retrieve user data from an identity provider such as Google, enabling end users to authenticate to the client using their Google login.

I used this tutorial from Real Python, which made the setup super easy.

The auth flow used here is Authorization Code Flow, which does the following (this was a good resource for me to learn):

  1. Client (app) requests authorization for the end user from the IdP (triggered when the end user clicks "log in with Google" on the app)
  2. IdP asks the end user to authenticate and consent to data being sent to the client (the end user is presented with, e.g., the Google login screen hosted by Google)
  3. IdP returns an authorization code to the client
  4. Client uses the authorization code to get tokens from the IdP
  5. Client uses the tokens to get user info (such as email address and name)

From there, especially when using the Flask session/user libraries, it's super easy to start using user info in the app.
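For reference, here's a rough sketch of that flow wired up by hand in Flask with requests, rather than with the client library the tutorial uses. The client ID/secret, redirect URI, and routes are placeholders, and I've left out the state/nonce checks you'd want in real life:

```python
# Minimal Authorization Code Flow sketch with Google as the IdP.
# CLIENT_ID, CLIENT_SECRET, and REDIRECT_URI are hypothetical placeholders.
from urllib.parse import urlencode

import requests
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for the session cookie

CLIENT_ID = "your-google-client-id"
CLIENT_SECRET = "your-google-client-secret"
REDIRECT_URI = "http://localhost:5000/callback"

AUTH_URL = "https://accounts.google.com/o/oauth2/v2/auth"
TOKEN_URL = "https://oauth2.googleapis.com/token"
USERINFO_URL = "https://openidconnect.googleapis.com/v1/userinfo"

@app.route("/login")
def login():
    # Step 1: send the end user to the IdP to authenticate and consent
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": "openid email profile",
    }
    return redirect(f"{AUTH_URL}?{urlencode(params)}")

@app.route("/callback")
def callback():
    # Step 3: the IdP redirects back here with an authorization code
    code = request.args["code"]
    # Step 4: exchange the authorization code for tokens
    tokens = requests.post(TOKEN_URL, data={
        "code": code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
        "grant_type": "authorization_code",
    }).json()
    # Step 5: use the access token to fetch user info
    userinfo = requests.get(
        USERINFO_URL,
        headers={"Authorization": f"Bearer {tokens['access_token']}"},
    ).json()
    session["email"] = userinfo.get("email")
    return f"Logged in as {session['email']}"
```

The library version does all of this (plus token validation) for you, which is exactly why I didn't roll it myself.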


27 December 2020: Cloud IAM is confusing

AWS, IAM, and my current example of this problem

My current project is a little website that uses a hosted Postgres database. The project turned out, at least on the first day, to be more about AWS IAM and less about websites and databases. AWS IAM itself is pretty good; it's granular, role-based (if you want!), and can be defined as JSON. It also offers a variety of access options, like console/password and API key. Unfortunately, most enterprise software doesn't do identity very well. A lot of applications, such as Postgres, have their own account components. That means more accounts and more passwords and more permissions to manage. When you use Postgres through AWS, for example, you're given a "master" account on the database that has a password. It has nothing to do with any of your AWS IAM accounts that you so lovingly crafted, and that makes your infrastructure less secure. Software (including every AWS account) generally has one master account and key that isn't otherwise attached to an identity provider or directory, and that's a good thing if it's managed correctly. The other accounts, including day-to-day users, would be managed best using some sort of access management system that isn't built into the app.

What can be done about it?

Typically, integrating account systems like this involves directory mappings, infrastructure "glue", or accepting the fact that you'll need more accounts and passwords (and maybe a password vault/manager). I was glad to find out that the infrastructure "glue" was an option for me with Amazon RDS Postgres, since the other options were infeasible (directory mappings) or undesirable (more passwords) for me. AWS has some glue to integrate existing IAM accounts with Postgres usernames on RDS-hosted Postgres.

How it works in this case

In AWS, you can define a Policy that grants (Effect) connection (Action) on your database ARN (Resource). You have to specify which Postgres user it is mapped to. Then you can apply this Policy to users/groups in IAM, so particular AWS IAM users are allowed to log into the Postgres DB as the Postgres user without having a new password and login. Then you can use your IAM API credentials to ask AWS for a database auth token.
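To make that concrete, here's roughly what that Policy could look like, created with boto3 and attached to an IAM user. The account ID, DB resource ID, Postgres username, and all of the names are made-up placeholders:

```python
# Sketch: create an IAM Policy that allows rds-db:connect to one database
# as one Postgres user, then attach it to a hypothetical IAM user.
import json

import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",            # the grant
        "Action": "rds-db:connect",   # the connection action
        # Resource: this database's resource id, mapped to one Postgres user
        "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL/my_app_user",
    }],
}

iam = boto3.client("iam")
resp = iam.create_policy(
    PolicyName="my-app-rds-connect",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(
    UserName="my-app-iam-user",
    PolicyArn=resp["Policy"]["Arn"],
)
```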

This offers several advantages over a username/password situation:

  1. Users don't have to remember another password
  2. AWS admin can revoke user's access at any time via the AWS console without touching Postgres (ability to generate db auth token = access)
  3. Database access can run as its own non-human identity without using a password or affecting existing human users

I'm especially excited about point #3 because I think it's a super bad practice to run apps as human identities, but it's so easy to just use an existing account/password. Running processes as yourself makes you the bottleneck (what if you quit? what if you forget your password?), tends to result in over-permissioning of your own account, and turns simple authorization changes into a complex web of settings that can break other things. You should have an identity for each app in order to keep authorizations simple and permissions at the lowest level possible.

Ok, but don't you still have a static AWS API key?

So, as good a solution as this is, it's not the end of keys. There's still that chicken-or-egg authentication problem which requires you(r app) to authenticate to AWS. This particular solution requires that the AWS API key be the initial authentication to AWS. Then AWS generates the database auth token so the IAM user can authenticate to Postgres as the Postgres user. An acceptable way to handle the API keys is via aws configure, which allows the user to store them locally. In my case, I connect to my database from a Python app, so I use boto3 to access my AWS API keys in a cred file. There are no credentials in code ever ever EVER, which makes me happy. If I wanted to be super, extra careful, I could try to store the AWS API keys in a password manager or HSM, but when it comes down to it, there will always be some kind of known key/certificate in order to gain access to the other keys.

But at least this way, I have a dedicated AWS IAM user, with access to only one AWS resource, that maps to one Postgres user. When I run my Python app, it connects to the database as the mapped user. If I ever need to revoke that user's access, I can disable the IAM user's API key or remove the Policy.
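Here's a rough sketch of what that looks like from the Python app, using boto3 to generate the auth token and psycopg2 to connect. The hostname, usernames, and db name are placeholders:

```python
# Sketch: connect to RDS Postgres as the mapped user with an IAM auth token
# instead of a password. Credentials come from the local AWS cred file.
import boto3
import psycopg2

HOST = "mydb.abc123xyz.us-east-1.rds.amazonaws.com"  # hypothetical endpoint
PORT = 5432
DB_USER = "my_app_user"   # the Postgres user the IAM Policy maps to
DB_NAME = "mydb"

rds = boto3.client("rds", region_name="us-east-1")

# AWS signs a short-lived token using the IAM credentials on this machine
token = rds.generate_db_auth_token(
    DBHostname=HOST, Port=PORT, DBUsername=DB_USER
)

# The token goes in place of a password; RDS requires SSL for IAM auth
conn = psycopg2.connect(
    host=HOST, port=PORT, user=DB_USER, dbname=DB_NAME,
    password=token, sslmode="require",
)
```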

But how?

I'm not exactly sure how AWS is doing this on the back end, but the database auth token signature looks like it's unique each time it's generated. That's important because it means the tokens are being verified by some kind of cryptography, they can expire, and access can be revoked easily. Probably what's happening is there's an asymmetric keypair between the Postgres db and AWS, which is used to generate and verify the token. If AWS is provided with the correct AWS API keys for an account that has permissions to login to the database under a particular Postgres username, then AWS uses the private key to sign a token and passes the token back to the user. The user passes the token to Postgres, and finally Postgres uses the public key to verify the signature and allow access. AWS-hosted Postgres must also have an AWS IAM role defined so this mapping can work.


23 December 2020: Food YouTube is Bad [published retroactively]

I found this draft I wrote about YouTube and parasocial relationships with food content creators in 2020. I'm publishing it now [in 2023]. A lot has happened since December 2020, but it's an interesting window into what the internet was like in 2020. I made some minor edits, but the majority of it is original.

I’ve primarily watched the same 3-4 cooking YouTube channels for a few years now: Bon Appétit (BA), Babish Culinary Universe (Binging with Babish), and Joshua Weissman. I don’t watch much of BA any more since this all came to light in June, but after I stopped watching it, I had some time to think about it. The BA YouTube channel sold me a lifestyle of expensive materials and self-loathing, much in the way that other mass media organizations sell those doubts about physical appearance and social standing. I picked up a few cooking tricks here and there from BA, but I picked up even more bad attitudes and pseudo-intellectual hot takes about food that do more harm than good for me. The other channels I watch suffer from some of the same problems, but BA does it in a really attractive and deceptive way. I’m not arguing that nobody should watch BA, or that BA is doing something bad for the world. I watched many hours of BA content because I liked it so much! But I do think the Test Kitchen has a complicated relationship with instruction, advertisement, and parasocial interaction. Viewers beware: this relationship may affect how you see yourself and the social context of your own kitchen.

Them and Us

As I've seen others online point out, BA’s running video series were more about the chefs’ personalities than the food. The long-format episodes of Gourmet Makes, It’s Alive and Alex Eats Everything focused primarily on personas and relationships within the Test Kitchen, making the viewer both want to be with them and be them. The chefs talked directly into the camera to us, hanging out like old friends over dinner. They made a conscious effort to break the fourth wall in every episode, showing clips of the chefs talking to the crew and making jokes about their own blunders. As nice as these unreciprocated relationships may have felt to the viewer, especially in pandemic times, they were advertising a high-class lifestyle by talking down to us. BA made the viewer feel a little too poor, a little too unskilled, and a little too unrefined to stop watching for fear of being left behind. We, the viewers, needed Chris Morocco’s “sensitive” palate and Claire Saffitz’s cultural capital to guide our unsophisticated fingers to the cultural pulse of the cooking world. We didn’t go to culinary school (and neither did Molly Baz, a fact that then-Editor-in-Chief Adam Rapoport said on camera more than once). We don’t shop at the FiDi Whole Foods. We can’t drive to upstate New York and buy fresh ostrich eggs and then make them into Jean-Georges eggs because we don’t know who Jean-Georges is. The Test Kitchen chefs are better than us, and they know it. But we can have a seat at their table if we buy in, financially and intellectually, to their world.

They Hate the Food We Eat

A lot of what I dislike about BA and Weissman is the way the chefs dismiss foods that they think are below them. See Weissman's But Better series (granted he also has a But Cheaper series, with good intentions but aspirational accounting methods). When they cook something quickly with a simple technique, they spin it as a fast recipe that they definitely could do better, if they wanted to.

Alex Delany’s entire series is about ordering every menu item at some of New York’s best (and kinda expensive) restaurants. This is a lifestyle that I’m not sure Delany himself can afford, given that he revealed his salary in the wake of the June blow-up and often talks about cooking cheap “rent week” meals.

But Delany’s show is fun to watch because it lets us live vicariously for a little while. What’s insidious about the regular "how to cook" type episodes are the little side comments about which brand/ item is superior. Nothing that you, the home cook, makes could possibly be as good as what they whipped up in the Test Kitchen if only for the simple reason that you don’t have the exact brand of ingredients they do. There are some exceptions, like Sohla El-Waylly’s videos, which show us a variety of brilliant and resourceful ways to make our own versions of Sohla’s recipes.

Claire Saffitz is so pretentious that I can barely make it through her videos any more. I do not want to read Dessert Person. I do not want to be called a Dessert Person. I do not want to make her terrible-looking corn bread or “tee hee shitty sprinkles!” birthday cake. Do you know what I called those growing up? Sprinkles. Because all I had ever known were normal-people ingredients, before I was some ~self aware~ New Yorker who couldn’t eat a birthday cake without giggling about how ironic it is.

They Hate the Way We Talk

There’s a lot of uncommon, and frankly, annoying vocabulary in Bon Appetit videos. “Jammy.” “Pullman loaf.” “Maillard reaction.” Brad Leone’s entire on-screen persona is that he’s the rough-around-the-edges outsider from rural New Jersey who can’t pronounce “water” and doesn’t know how to plan ahead. The running joke is that Brad is stupid. The rest of the Test Kitchen shits on him constantly, and I can’t tell if he’s in on the joke. The parasocial dynamic is, as always, confusing. Did you laugh when Adam Rapoport implied Brad wasn’t funny on camera? Do you laugh when he makes fun of Molly for not having attended culinary school? If you don’t — you are the joke. You are Brad and Molly.

But Aren't the Experts Allowed to be Pretentious?

I respect that the Test Kitchen staff is a highly-skilled, educated set of professionals. They certainly know more about cooking than most of us, and they do have the right to use the correct vocabulary and have opinions about the quality of their ingredients. My problem really is that, a lot of the time, the pretentiousness feels like elitism for elitism’s sake rather than as the natural result of somebody perfecting their craft. They’re putting on a show for us, but drawing us in like we’re good friends with them, while showboating how much better they are than us. Nobody could possibly follow along with these recipes — they’re too fast-paced and disjointed to be instructional. They do reveal some secrets of good cooking. But I don’t think Bon Appetit cares how good we are at cooking.

Is it cool when Joshua Weissman makes some totally unnecessary food from scratch using super expensive equipment? Is it cool the Nth time? Or do you start to realize what he’s actually saying about the food you eat and the way you eat it?

A Few Years Later

BA never fully recovered from June 2020. I don't watch much cooking YouTube any more, because over time it has gotten even more overproduced and parasocial. In my own kitchen, I try to focus on recipes that I can actually make from minimally processed ingredients. Minimally processed sometimes implies expensive, but it definitely doesn't imply that we have to be mean about it.


14 December 2020: Notes on Symbolic Execution

This is a first pass at understanding a few papers I’m reading about symbolic execution and its applications. I’m not an expert in this area of security.

Fuzzing

Code Coverage & Exponential Blowup

It’s hard to trace every possible path in a program. Branching creates exponential blowup of paths. Static analysis tools can warn you of this problem, and code coverage tooling can tell you how much of your source code is executed during testing, but developers do not necessarily test every path through their code because it would be too expensive. Analogously, Mayhem (Cha et al.) checks paths through binary code via symbolic execution.
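A toy example of the blowup (mine, not from the papers) -- every independent branch doubles the number of distinct paths:

```python
# Three independent branches -> 2**3 = 8 distinct paths through the function.
from itertools import product

def toy_program(a: bool, b: bool, c: bool) -> str:
    path = ""
    path += "A" if a else "a"   # branch 1
    path += "B" if b else "b"   # branch 2
    path += "C" if c else "c"   # branch 3
    return path

paths = {toy_program(*inputs) for inputs in product([True, False], repeat=3)}
print(len(paths))  # 8
```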

Bugs -> Exploits

While testing execution paths is important for finding bugs, it is also important for finding exploitable vulnerabilities in code.

The central question about a piece of software w/r/t finding exploits is: is there a set of inputs that, given the structure of this program, executes an exploit?
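Here's a toy sketch of that question in code (my own illustration, not from the Mayhem paper), using the z3 solver (pip install z3-solver) to ask whether any input satisfies the path constraints that lead to the "bad" branch:

```python
# Symbolically solve for an input that reaches the exploitable branch.
from z3 import Int, Solver, sat

def target(x: int) -> None:
    if x > 100:          # path constraint 1
        if x % 7 == 3:   # path constraint 2
            raise RuntimeError("exploitable state reached")

x = Int("x")
s = Solver()
s.add(x > 100, x % 7 == 3)   # encode the path to the bad branch

if s.check() == sat:
    crashing_input = s.model()[x].as_long()
    print("found input:", crashing_input)
    try:
        target(crashing_input)   # replay it concretely
    except RuntimeError as e:
        print(e)
```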

Conditional Jumps

Symbolic Execution

How this fits into my view of security

Sources

  1. Cha, Sang Kil, Thanassis Avgerinos, Alexandre Rebert, and David Brumley. "Unleashing Mayhem on Binary Code." 2012 IEEE Symposium on Security and Privacy (2012). Print.
  2. Baldoni, Roberto, Emilio Coppa, Daniele Cono D’Elia, Camil Demetrescu, and Irene Finocchi. "A Survey of Symbolic Execution Techniques." ACM Computing Surveys 51.3 (2018): 1-39. Print.

29 November 2020: "Strong Opinions, Weakly Held"

"There are no atheists"

I think a lot of prevailing ideas in the tech world lack a certain self-awareness as to the source of their legitimacy. People in technology love to pretend that they worship nothing and analyze everything. Some groups take this to its logical extreme -- like flat organizations with no managers or companies that operate on near-100% transparency, giving open access to docs, discussions, etc. But I believe we all worship something, even if it's at the altar of meritocracy. Look at any company's mission statement; those are commandments, even if they explicitly proclaim not to be.

A rare gem in the technology zeitgeist that evades this contradiction is the phrase, "Strong Opinions, Weakly Held." (Hereafter referred to as "SOWH".) I see this in core values webpages and job postings all the time. As if to say, "Please have some well-reasoned, carefully-constructed opinions -- but be amenable to other people's well-reasoned and carefully-constructed opinions." I think it's a great philosophy in general -- and in job search terms, it's kind of an unwritten equivalent to soliciting a writing sample or requiring an SAT essay score (in the spirit of, "We don't care what you wrote about, just show us that you can reason and communicate.")

It's a self-aware statement. It acknowledges that, as well-constructed as your opinion may be, other people will also have well-constructed opinions. You will have to sacrifice your own ideas for the good of the group sometimes, and that's okay. More importantly, the statement acknowledges its own sense of worship ("strong opinions") but offers a healthy way to work around the fact that we all worship something ("weakly held".)

No False Dichotomies

There's some emotional intelligence to SOWH -- it balances decisiveness with fairness, and leaves room for multiple correct but opposing opinions.

When we think about how to solve the problem that everyone has an opinion, there are a few options:

  1. Let everyone make a choice in equal measure, round-robin style
  2. Let nobody make a choice except the appointed leader
  3. Consensus after analysis

Clearly options #1 and #2 could be disastrous -- a software team would end up with completely disjointed services that simply do not work together resulting from #1 and unified services that work poorly resulting from #2.

#3 is how most teams do it. But #3 can have some unintended pitfalls: a) The loudest/ most PowerPoint-savvy teammate is the most persuasive -- even if the idea is bad, b) Multiple ideas are good ideas, but a false dichotomy is forced between the "good idea" and the "bad idea" among multiple good ideas, or c) Teammates start putting less effort into decision making because they dislike conflict and/or wasted effort of developing an idea that goes nowhere.

SOWH is a good tool to ameliorate these dysfunctional dynamics. "Strong Opinions" means the ideas have been researched, tested, and thought-through. Someone with a strong opinion has (hopefully) done sufficient work to recognize when a bad idea is presented beautifully. They also feel like their strong opinion is adding value to the discussion -- even when it isn't actioned -- because it matters to the decision-making process. And most importantly, it leaves room for multiple good ideas, with the implicit acknowledgment that not every good idea can come to fruition.

Sometimes an idea is good, but it's not good for the product. It may be too complex or time-consuming for what it's worth. But it at the very least gets discussed at the table and filed away in the "good alternatives we used to narrow down our decision" category. For some people, I would imagine, this is incentive enough to do the mental work to form a strong opinion even when they can't hold it as strongly as they wish.

Software vs. Real Life

In my life, I like to hear people's strong opinions. My friends and family certainly have some interesting strong opinions, but some people hold those opinions more weakly than others. When I'm working through a particularly tough problem, both types of people are helpful for me to talk to. The intuitive thinkers, who hold their opinions very strongly, encourage me to trust my own instincts. The curious thinkers validate my ways of thinking and helpfully suggest other ways of looking at the world without forcing an opinion.

But when you work in software, there's a balance.

In software, I believe in SOWH, As Long As We're All Rowing The Boat In The Same Direction.

I've had the experience of working with some incredibly knowledgeable and open-minded people, but when it comes time to make a decision, I think their (very insightful) opinions are a little too weakly held. I often seek wisdom but come back with validation of my own ideas, making me feel great about myself but confused about where to go next. When I bring an idea to a table of brilliant peers, I want their feedback. I want them to tell me how they would do it, or at least why my idea is good/bad and what the alternatives are. And I especially want them to enforce some decisions across the team. Every team needs standards (a direction in which to row the boat). Even if those standards aren't the best idea or the most agreeable idea, SOWH would probably argue that any decision is better than no decision. (If you couldn't tell, I do not worship at the altar of Flat Organizations).

Fairness and decisiveness can be reconciled, if SOWH is used correctly. A good leader discusses all the Strong Opinions, validating their merit and helping the group reason through them. A great leader drives the discussion to a conclusion, choosing the Strong Opinion that's the best fit for the product, team and company. And in a culture of SOWH, the owners of the losing ideas will know how to concede and go in the direction of the rest of the team.

Wrapping Up

In general, I think SOWH is much more subtle and intelligent than it necessarily gets credit for -- it derives its legitimacy from the very thing it cautions us all not to believe in too strongly. In my own SOWH, that makes it even more compelling. But on a less abstract level, SOWH is a great way to make sure your team is having thoughtful discussions that respect everyone's opinion without bending to the compulsive need for directionless fairness. SOWH from the individual indicates careful reasoning; SOWH from the organization indicates respect and an ability (but not necessarily a mandate -- that's still up to the people involved!) to make decisions for the good of the product.

15 August 2020: Tech Interviewing is a Life Skill

My main point here is that the things that make us good at interviewing are generally things we can learn over time, because they are based on critical thinking about, and pattern recognition in, math and science. And while they are not bestowed upon us at birth, they do take many years to develop, so some people start out at an advantage or disadvantage based on factors largely out of their control.

Part I: When You're Surrounded By Perfect Candidates, But You're Not One Of Them

I went to college with a lot of really smart people, and I live on planet Earth which has a large pool of really smart people. In general, on Earth, I feel pretty smart. In college I felt like the dumbest one there. Of course when you're in this type of environment, where everyone was their high school valedictorian with high SAT scores and a million extracurriculars, you're no longer going to be #1. But I felt dumb in a new way that I didn't even fully process until after I graduated. I felt dumb not because I was no longer the top of my class, but because only certain types of "smart" really appeared "smart" in a pool of CS students. The types of smart that led to good grades and awards and prestigious internships were the types built up over many years, from early childhood, that compounded lessons from areas outside the standard school curriculum with reasoning and pattern recognition skills that kids pick up quickly when taught by a very educated adult.

These hyper-intelligent, worldly, majestically-educated students had been doing math for fun since middle school and programming before I ever knew what programming was. But we both got into $Competitive_School and they weren't any smarter than I was. They were just more educated before we arrived as freshmen.

And it turns out that being hyper-intelligent, worldly, and majestically-educated makes you really, really good at the technical interviews which gatekeep the tech world's most sought-after companies (like the FAANGs). For someone who has been doing math as a practice (and not just in a public school course) for many years, whiteboard questions about graph theory are (I would guess) an exercise of their long-held talents that they can display with some effort. For someone like me, those questions are a nightmare. Data Structures was hard. Discrete Math/ Graph Theory was hard. I'm not good enough at those subjects, nor did I really practice them enough over the course of a semester, to be able to flex that knowledge like it's a muscle. Because while other students were following along with the lectures, I was still years behind on the math and reasoning skills that underlie those subjects.

So for a very long time now, I have felt like I existed in a world full of extremely well-qualified candidates who could do backflips around me in a whiteboard interview, and I'm starting to understand the reasons why.

Part II: Pattern Recognition

Whether the interview question is a one-liner about technology X vs. Y, or an infamously fun brain teaser, the most helpful skill will be recognizing a pattern that you've seen before.

A lot of problems in computer science involve lines of thinking that can be applied to a family of problems. Once you understand recursion, you can apply it to a lot of things. Once you understand applying recursion to the Fibonacci problem, you can speak about the limits of it as a tool and alternatives such as memoization. And once you can do that, you can see other problems that would have similar limitations and apply the same thinking. Is it recursive, does it use the result of smaller recursive calls to get bigger answers, and is it exponentially expensive to re-compute every recursive call as n -> inf? Memoization. It seems really counterintuitive at first to memorize the properties of seemingly random problems in computer science, but the payoff comes when you start to see that a lot of these problems are a stand-in for a generic area of CS, and once you learn the concept, your brain will connect the rest of the dots. I wish somebody had told me that before I took Data Structures.
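To make that concrete, here's the classic example in Python (the function names are just for illustration):

```python
# Naive recursive Fibonacci re-computes the same subproblems exponentially
# often; memoization collapses that to linear work.
from functools import lru_cache

def fib_naive(n: int) -> int:
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)   # exponential blowup

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)     # each n computed once

print(fib_memo(90))  # instant; the naive version would take ages
```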

Part III: Critical Thinking

Another important aspect of being good at technical interviews is being good at reasoning about the problem. It may not always be the case that the question being asked of you is familiar, but it probably has some familiar pieces, and with some reasoning, you can find an answer.

Critical thinking serves two roles here:

  1. To build an understanding of the foundational patterns/problems that will be useful later on. This is how those cornerstone problems in CS (such as Fibonacci, O() complexity of algorithms, various problems in graph theory, synchronization mechanisms, et al.) enter your brain and stay there for future reference. If you really learn these problems thoroughly, they will stick with you and come in handy later. But you have to be able to reason about them and really understand them for that to happen.
  2. To do the "twisting and turning" of what you already know and understand in order to arrive at the solution. This is the part where the professor solves the problem, and you ask them why, and they pull out a bunch of random theorems from other parts of math that show how they are correct. You would have never gotten there on your own, likely because you either never saw those patterns/ problems before, or because you didn't realize you could apply them to this particular problem.

I think the second type of reasoning comes with a lot of practice in the same way that being good at math comes with practice. Sending me through my public school math and then asking me to reason about a math problem in college is like giving someone a lesson on walking in a straight line and then telling them the final exam requires ballroom dancing and a backflip. Sure, I understand most of the mechanics, but I never knew I would have to twist my brain into thinking about the material in that way; I just now learned it could even be done. But with a lot of practice and the help of various problems/ patterns, I've seen myself grow tremendously in this area and it's been really fulfilling to see.

Part IV: You (Maybe) Learned This As A Kid

You probably didn't learn all of Data Structures & Algorithms in K-12, but I wouldn't bet money on that based on the people I met in college CS. Still, you may have learned valuable math, logic, or debate skills that come in handy later on in life. I think this is probably especially true of anyone who was really into math programs, but it could also be true of students in certain other extracurriculars. It certainly could be true of more students if their curriculum was redesigned.

This is definitely a much larger topic than I can cover right now, but I don't like the way math was taught to me as a kid. It was unbearably slow, taught completely by rote, and peppered with random topics that made no sense in the context of when they were taught. I actually had a very good education when it came to verbal skills. I believe that has gotten me very, very far as an adult and I am so grateful for the literacy programs my public school had. But when I think back on it, were these not pretty similar to the math lessons? Sure, some of my teachers had a bias for reading over other subjects, but I don't know if that's the whole story. Reading/ literacy is a skill I learned by long and daunting practice, often employing rote mechanics. I would read for long stretches of time in and out of the classroom, participate in summer reading programs at the library, and write stories for fun. In school we would have quiet reading hour, we would get independent time to read from a special box of stories and test our learning after with a self-administered quiz, and we would get lessons on things like homophones. All pretty rote and autonomous activities for a 7 year old child. Yet these lessons propelled me forward for the rest of my life, and even 10+ years later when I took the SAT, my verbal skills were better than my math skills. I believe these skills made me a better note-taker and a better learner in the lecture/classroom/textbook/essay environment. In contrast, we didn't spend so much time on math. Sure, we had lessons on the chalkboard about how to do arithmetic, we talked about different ways to do the arithmetic and a bit of the underlying math, and we practiced with worksheets in-class. But it wasn't as engaging, it didn't feel as important, and for me, it was so boring. The only rewarding thing about doing math in school was memorizing the steps and getting an A on the test. Engaging with reading and writing felt so natural, and math felt so foreign. There's always the possibility that my brain is just wired for verbal skills more than non-verbal, but I do wonder if I had the parallel experience but with math if I would be one of those Perfect Technical Interview Candidates by now.

And sure, a lot of that last paragraph I just wrote could be brushed off as entirely anecdotal and biased toward my own worldview. Maybe I really was just born with a reading brain and I got lucky because my early education was taught by reading teachers. I'm also pretty good at languages, for what that says about me. But there's some evidence here that might back me up: John McWhorter says that learning to read by phonics proves the most effective for students of all backgrounds, which is especially good news for children of lower income households who might not have the huge library at home that could support them in other types of self-directed learning that some school districts try out. That's not to say that other types of reading education don't work, but that phonics works pretty well for the general population when there are not other resources available. Phonics works in a pretty mechanical and rote way: you just sound out the letters and build up words. Phonics is what worked for me. It's how my dad taught me to say "cat" with fridge magnets (thanks dad!), and I should really call him to tell him he nailed it. It's still how I engage with language, how I have always been pretty good at spelling, how I pick up new languages fast, how I am (sort of) trying to get the basics of Hebrew down. So I do believe there is something about repetition and practice that builds up verbal skills in your brain. First and second grade were like verbal skills boot camp for me, and I attribute a lot of my abilities to that.

But I don't think math works that way. I had a rote and mechanical math education, and it was bad. Because my teachers took similar approaches to reading (where, I think, rote works!) and math, I learned how to do things like memorize how to multiply and divide on paper. And while we didn't spend nearly as much time in those early elementary years doing math as we did reading, we did learn these subjects in similar ways, by watching the teacher do it on the board and then repeating the mechanical steps on a worksheet. We would occasionally try something new, like using blocks to understand counting. But the reasoning required to really understand math just never arose from these activities. Math requires an entirely different way of learning, involving pattern recognition and those "twisting and turning" reasoning skills that pass tech interviews. If you don't believe me, pick up a copy of The Art of Problem Solving.

Part V: The Social Aspect

So the last thing I’ll say about a technical interview is that you (usually) also need to be likable and good at communicating. These are also skills you build up over time, but in a different way. You have to be able to explain your thinking and build rapport with your interviewer. It also helps to have a bit of humor and warmness to you, so you don’t look like a robot whose sole purpose is passing a technical interview. This is also for you to be able to gauge the person interviewing you — are they a robot? Are they going to treat you like a human? Can they have some humor and warmness? These are important things to look out for, because it’s really difficult to judge your potential new coworkers as coworkers in the power dynamic that an interview creates.

A lot more can be said for other skills that do not shine through in a technical interview. I think a lot of the ways in which I am really smart are hidden by tech interviews, just like they were hidden in college. I'm still a good software engineer, and I am learning these technical interview skills as I go along. Interviewers should certainly be looking for other cues as to candidates' abilities: problem solving, collaboration, communication to name a few. So hopefully a technical interview is just one piece of the puzzle, and the other pieces are what really will make up the bigger picture in the way of matching a candidate to a position. I know I certainly wouldn't work for a company that didn't make an effort to assess me on other qualities, because I don't want to work with colleagues who were hired solely for their ability to ace the whiteboard.

4 Aug 2020: Notes on NoSQL Databases

Notes from watching Introduction to NoSQL, Martin Fowler

Problem: Object-relational impedance mismatch (Things are logically organized into objects in code, but those objects have to be split up and stored into tables and mapped by schema.) Hence ORMs.

SQL databases are designed to scale up on large servers, not out across large grids of small machines, which became dominant as large internet companies grew in the 2000s.

Thus Bigtable & Dynamo

NoSQL is hard to define, but it's generally non-relational, cluster-friendly, schema-less, open source.

There are four general categories of NoSQL:

  1. Column-family. Examples: Bigtable
  2. Graph (node-and-edge model; good at moving across relationships between things, whereas in relational you need foreign keys and joins; interesting query languages for navigating graphs). Examples: Neo4j
  3. Document (typically stored as JSON; you can query into the document structure; an attribute is kind of like a key...). Examples: MongoDB, Dynamo
  4. Key-value (like a persistent hashmap; you can usually store metadata about the records, though, so this becomes more like a document database). Examples:

Martin Fowler calls Document, Key-Value and Column-family "Aggregate-Oriented" databases.

Though NoSQL is schema-less, there is an implicit schema in NoSQL dbs that becomes clear when you start querying! For example, querying MongoDB for a particular JSON attribute.

Big advantage of aggregate-oriented is that you can store multiple pieces of information together in a single record, whereas in a relational db, you would need to have two tables and map one row from a table to multiple rows of another table in order to associate the information correctly.
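A tiny made-up example of the same order modeled both ways:

```python
# Aggregate: the order and its line items live in one record.
order_aggregate = {
    "order_id": 1001,
    "customer": "sarah",
    "items": [
        {"sku": "BOOK-1", "qty": 1},
        {"sku": "PEN-3", "qty": 2},
    ],
}

# Relational equivalent: one row in `orders`, N rows in `order_items`,
# associated by order_id.
orders = [
    {"order_id": 1001, "customer": "sarah"},
]
order_items = [
    {"order_id": 1001, "sku": "BOOK-1", "qty": 1},
    {"order_id": 1001, "sku": "PEN-3", "qty": 2},
]
```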

Aggregation makes clustering easy because you know what will need to be stored close together.

Column-family is also aggregate-oriented.

Aggregation has a drawback: really difficult to slice and dice your data after you've decided on the aggregation.

So how do the different NoSQL models handle relationships? Aggregate-oriented databases are similar to relational dbs in that you have to associate records by attributes or values in the data. Graph databases are oriented toward relationships, since they are node-edge models. This is a good guide for deciding which db to use.

Consistency in NoSQL: SQL=ACID, NoSQL=BASE. But Martin Fowler isn't a fan of this framing. Graph dbs are ACID. Aggregate-oriented databases don't need ACID quite as much, because aggregations are transaction boundaries! So you shouldn't really need to lock more than one aggregation at a time. If you update multiple documents at a time in a document db, then you need additional atomicity.

In general, transactions are achieved by letting a user retrieve a versioned record, update that versioned record, and send it back. Then, when two users have written to their own copy of the same version at the same time, you can do whatever conflict resolution you need to.
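That's basically optimistic locking. A toy sketch of the idea (in-memory only, not any particular database's API):

    # Toy version-check-on-write (optimistic locking), in-memory only.
    class ConflictError(Exception):
        pass

    store = {"cart:42": (1, {"items": ["book"]})}  # id -> (version, data)

    def read(record_id):
        version, data = store[record_id]
        return version, dict(data)

    def write(record_id, expected_version, new_data):
        current_version, _ = store[record_id]
        if current_version != expected_version:
            # Someone else wrote since we read: merge, retry, or ask the user.
            raise ConflictError(f"{record_id} is already at v{current_version}")
        store[record_id] = (current_version + 1, new_data)

    v, cart = read("cart:42")
    write("cart:42", v, {"items": ["book", "mug"]})      # fine, now at v2
    try:
        write("cart:42", v, {"items": ["book", "pen"]})  # stale version
    except ConflictError as e:
        print("conflict:", e)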

Logical consistency vs. replication consistency

What if two nodes lose communication with each other? Do you allow both to modify the same object, or neither? This is a tradeoff between consistency and availability. Dynamo needed to guarantee availability in the shopping cart for Amazon.

CAP Theorem: Consistency, Availability, Partition tolerance. Pick 2. (aka, if you get a network partition (communication failure between nodes), you can either have availability or consistency.) But this is on a spectrum, so it's not always just one or the other. Even if the network is up, you have a performance tradeoff if you want to be 100% consistent, since it takes time to absolutely guarantee consistency across nodes. So it's like the safety vs. liveness tradeoff in concurrency.

So when should you use a NoSQL database? Two drivers: 1. Large amounts of data that can't fit well into a relational database. 2. Natural aggregates, for example when publishing news stories that have metadata and content together. Another reason is analytics, as an alternative to data warehousing.

3 Aug 2020: DigitalOcean Databases

Just some notes as I try out DigitalOcean's database-as-a-service platform.

General observations:

What I'm doing:

  1. Created my DO account
  2. Created new Postgres 12 db
  3. Did some initial configs on it (IP allow listing)
  4. brew install postgres because my college MacBook, where I did all of my college db work, is asleep
  5. Connected to my new db from the psql client. Pretty smooth; basically what I would expect.
  6. Found a dataset on Kaggle in CSV format, created a table for it in the db, and used the \copy command in psql to copy the data in. Now I can query Netflix titles!
  7. Generated an OAuth API bearer token and used it to list my database clusters via an HTTPS request; see the sketch after this list. (Note: it was not immediately clear to me that I would need to list all clusters first, get the cluster id, and then get more specific details from it. Wish the cluster id were available in the UI.)
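If I'm reading the DigitalOcean API v2 docs right, the cluster list lives at /v2/databases. Roughly what step 7 looked like done from Python instead of curl (token pulled from an env var; treat this as a sketch, not gospel):

    # Rough sketch of listing database clusters via the DigitalOcean API v2.
    # Assumes the bearer token is in the DIGITALOCEAN_TOKEN env var.
    import os
    import requests

    token = os.environ["DIGITALOCEAN_TOKEN"]
    resp = requests.get(
        "https://api.digitalocean.com/v2/databases",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()

    # Each cluster comes back with an id, which you then need for the
    # more specific /v2/databases/<cluster_id>/... endpoints.
    for cluster in resp.json()["databases"]:
        print(cluster["id"], cluster["name"])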

Questions:


1 Aug 2020: Checking in on my 2020 goals

Wow, it has been a while! As the COVID-19 pandemic completely took over life as we knew it in 2020, I was making sporadic progress on the goals I outlined in my earlier post. I also have some more clarity on what I want, career-wise, looking into the future. So here's what I've been working on:

So I guess I've been working on less than I thought?! But I have been busy. At work I have the typical workload (+, you know, the pandemic).

I've also been working on brushing up on SQL and concurrency. I apparently remember a lot about spin locks and not a lot else (probably because I haven't done much of it outside of college OS class).


1 March 2020: Lambdas!

Today I was wandering around my little AWS playground and I was kind of bored by what I was doing (messing with flask and security groups) so I decided to see if I could get a Lambda up and running. Some days I love following tutorials and absorbing every word. Other days, I want to just break things and see how far I can get. Today was one of those days.

So what is a Lambda? It's a function-as-a-service. You give AWS some code to run and it figures out the rest for you. How's that different from running code on EC2 or in a container? Well, it's even easier. There's no infrastructure, OS, or even runtime to worry about. You give AWS the code -- literally the source code -- and it takes care of the rest.

Well you do have to give it a trigger. Otherwise it would just be code that AWS doesn't know when to run.

Here are (roughly) the steps with commentary, for doing what I did today:

  1. In the AWS Console, open Lambdas and Create Function
  2. Author from scratch (it's more fun this way...?)
  3. Choose a runtime you like (Python here)
  4. Permissions: Create new with basic Lambda permissions
  5. Add trigger: API Gateway (trigger this function on an API call)
  6. Create new REST API; Open with API Key

So at this point I ostensibly have my own API Gateway backed by a lambda function. But how do I use it? On the function's Configuration tab, there's a Designer diagram. Click the API Gateway icon. It will show some configurations, including the API endpoint. So open up a new tab and try it out!

Well, you'll find that it doesn't work. You get a nice little {"message":"Forbidden"} response from your own API because you set the authorizations to Open with API Key but you didn't provide a key!

Get the key from the AWS Console by opening the API Gateway service and navigating to API Keys. The API Gateway page is also where you can disable API Keys or enable IAM authorization for your APIs.

One way to provide the API key to the endpoint is in the headers as x-api-key. You'll notice that curl $ApiEndpoint gives you back the same forbidden response, while curl -i -H "x-api-key: $Key" $ApiEndpoint invokes your Lambda!

At this point your Lambda will execute the boilerplate Hello World Python code that gets populated when you create a Python Lambda, but you can edit it to do whatever you want!
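For reference, the boilerplate is roughly this (from memory, so treat it as a sketch): a handler that returns a canned response in the shape API Gateway expects.

    import json

    # Roughly the default handler AWS generates for a Python Lambda.
    # API Gateway wants back a dict with a status code and a string body.
    def lambda_handler(event, context):
        return {
            "statusCode": 200,
            "body": json.dumps("Hello from Lambda!"),
        }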


27 Feb 2020: On Writing Good Unit Tests

I still remember the day I learned about automated testing. I was in the fall semester of my junior year of college, taking a class about software engineering. All of my other classes up to that point had been very "CS"-y and not very, well, "practical." This particular class required a semester-long software engineering project with CI/CD, testing and (somewhat humorously) UML diagrams. A guest lecturer pulled my group to the side and asked us if we knew how to write a test. And thus the magic of Assert() was revealed to me. Today as a software engineer, I pride myself on the quality of my tests and the efficiency of my CI/CD pipeline. I don't consider a feature "done" unless there are tests. I will not tell you my code is "ready" if I didn't push the play button and wait for a versioned, tested executable to come out the other end of the pipeline. I definitely think these are some of my strengths as a software engineer, but there are also some things that I'd still like to improve on in this area.

It's kind of hard to definitively say what a "good" test is but there are some easy ways to identify bad ones. So we can start there. Bad things to do in tests:

Here are my tips for good testing (but this is by no means comprehensive!):

Of course there are still things that mystify me about testing. Here are some of those things:

In my personal experience, comprehensive unit testing has saved me a lot of trouble. It's costly to do up-front and your non-developer coworkers might question why simple features tend to take you so long, but you will more than make up for the time when your program works well and you can identify bugs very quickly. I love my end-to-end tests and I would hate to try to "verify" that my program was working by crawling through application logs and hoping everything looked right. When in doubt, write a test! Your future self will thank you.
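To make "write a test" concrete, here's the smallest version of what I mean, with pytest and a made-up helper function:

    # The smallest useful kind of test: call the code, assert on the result.
    # (pytest picks up any function named test_*.)
    def parse_version(tag: str) -> tuple:
        """Turn a tag like 'v1.2.3' into a comparable tuple of ints."""
        return tuple(int(part) for part in tag.lstrip("v").split("."))

    def test_parse_version():
        assert parse_version("v1.2.3") == (1, 2, 3)

    def test_parse_version_without_prefix():
        assert parse_version("2.0.10") == (2, 0, 10)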


5 Things I Want to Learn (or improve) in 2020