March 30th, 2020

Praising the wrong technologies

The other day, I took a look through the latest edition of Computer Organization and Design: The Hardware/Software Interface, by David A. Patterson and John L. Hennessy, which has long been considered one of the preeminent textbooks on computer architecture. I'd seen the book and glanced through it before, but I felt I might have some gaps in my learning for not having read it more thoroughly, so I thought I'd sit down with it for a while and see what I might gain from reading it cover-to-cover, as I believe people generally should with textbooks on any subject in which they intend to have some knowledge.

Fairly quickly, I discovered some alarming (to me) attitudes in the book about computers and computing. Early in its first chapter, the book has a section called "Eight Great Ideas in Computer Architecture", which highlights what the authors consider to be among the most important ideas in the history of computer design: so important that they invent their own symbols for them, which they then use to signpost these concepts wherever they recur throughout the book. This, in itself, is a clever device, but the problem is that most of the "great ideas" the authors so praise are actually terrible ideas which have destroyed computing.

The "eight great ideas" are as follows (the names are from the book, but the descriptions are my own):

- Design for Moore's Law: Because computer technology has historically developed rapidly, designers must anticipate that the technology will have advanced by the time their designs ship, and so they should design for where the technology will be, not for where it is.
- Use Abstraction to Simplify Design: Because computer hardware and software are often very complicated, designers should use abstraction to simplify their work, hiding more complex details from themselves so they can focus on simple things that their minds can understand.
- Make the Common Case Fast: Because most people use their computers for a specific set of purposes, designers should optimize for those common uses, ensuring that the most people get the most benefit out of the computer, as opposed to niche usages which are not as important.
- Performance via Parallelism: If you have many things doing the same task at once, that task gets done faster. The book uses a symbol of a jetliner to symbolize this, because the jetliner has several engines all pointing in the same direction, which help it to fly faster than it could with one engine.
- Performance via Pipelining: One could call this "performance via serialism". Whereas parallelism is about having many things working side by side, pipelining is about lining several stages up in series so that they form an unbroken line, with each stage feeding directly into the next (see the sketch after this list).
- Performance via Prediction: Again using the principle that most people use their computers in a specific way, you can make a computer faster by predicting what people will want and doing it before they ask for it. That way, when they (for example) press a button, you can have the results of that button-press prepared in advance, rather than waiting until they press the button to prepare those results.
- Hierarchy of Memories: This essentially refers to the different levels of cache memory available in most processors today, the idea being that a small supply of very fast cache should be immediately available to the processor, with somewhat slower but larger cache kept a little farther away from the processor. This optimizes for space, speed, and cost (faster cache is more expensive per byte).
- Dependability via Redundancy: You should have spare parts for critical components of a system, so that if parts break down, the system can continue functioning without needing to wait until the broken parts are replaced.
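
To make the performance ideas concrete, here's a rough software analogy of my own (the book illustrates these ideas in hardware; none of this code comes from it, and squaring numbers just stands in for real work):

```python
# Toy software analogies (my own illustrations, not from the book) for three
# of the "great ideas", using squaring numbers as a stand-in for real work.
from concurrent.futures import ProcessPoolExecutor
from functools import lru_cache

def square(n):
    return n * n

# Performance via Parallelism: several workers doing the same job at once,
# like the jetliner's multiple engines.
def parallel_squares(numbers):
    with ProcessPoolExecutor() as pool:
        return list(pool.map(square, numbers))

# Performance via Pipelining: stages lined up in series, each feeding
# directly into the next; every value flows through the whole line.
def pipelined_squares(numbers):
    fetched = iter(numbers)              # stage 1: fetch a value
    executed = (n * n for n in fetched)  # stage 2: compute
    return list(executed)                # stage 3: write the result back

# Hierarchy of Memories: keep recently-used results in a small, fast cache
# so that repeated requests never pay the full cost again.
@lru_cache(maxsize=128)
def cached_square(n):
    return n * n

if __name__ == "__main__":
    print(parallel_squares([1, 2, 3]))         # [1, 4, 9], done concurrently
    print(pipelined_squares([1, 2, 3]))        # [1, 4, 9], done in stages
    print(cached_square(3), cached_square(3))  # second call hits the cache
```

None of this is how the hardware actually does it, of course; the point is only the shape of each idea.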

Am I the only one who sees a problem with the praise of these ideas as the greatest ideas in computing?

First of all, except for the idea about abstraction and the idea about redundancy, every one of these ideas is about performance: Making the computer fast. From reading the list, you'd think that the only important thing about a computer is that it runs quickly, in which case I'd advise the authors to get rid of their computers and invest in a flashlight, as nothing moves faster than light. The idea that computer technology keeps advancing is an outdated paradigm from the 20th century, a time when rapid improvements in microelectronics kept yielding exponentially faster CPUs. Today, we've long since passed the point where most computers are fast enough and big enough for most people's common usages. If you should optimize for the "common case", as the authors advise, then you should stop making computers faster, because they are already faster than most people need them to be. The idea that the most important thing about a computer is its speed is one of the most stupid and outdated ideas you could hold in the world today, equivalent to thinking that the only important thing about a human being is how much money or power they have. What a terribly one-dimensional way of looking at the world.

As for abstraction, I've written many times in the past about what a stupid idea this is. "Computer professionals" spend most of their working time developing ways to avoid having to know anything about the computers they work with, because they are too stupid to understand those computers; then they proudly present "new" technologies whose only new feature is that they spare programmers and designers from having to know what they are doing. Whenever a newer version of an operating system comes out, its key feature is that it has found new ways for programmers not to know how it works. Why are we expected to buy new versions of software to enable programmers' lack of comprehension? This is a "great idea" in the same way that slavery is a great idea, except it's worse, because at least with slavery, something useful gets done.

The only useful idea among these "eight great ideas" is dependability via redundancy, and while it's true that reliability through redundancy is an important concept, to highlight it as one of the most important features of a computer is to catastrophically misunderstand not just computers but any device. Imagine that an inventor proudly declares that they have just invented some new machine, and when asked what the purpose of the machine is, they reply: "It doesn't break down!" That's not a function; it's a nice feature of a machine that does have a function. The fact that computer designers now think it's a great idea to make computers whose core function is to keep running shows that they have no idea what a computer is actually for. A computer is for receiving, processing, and presenting information, not for redundantly running with no purpose.

If you want a great idea of computer architecture, here's one for you: A computer is a programmable information device which processes information in the way the user tells it to, interfaced to some memory which can store information and to some input/output mechanism. That's what a computer is. This is a great idea of computer architecture; in fact, it's the great idea of computer architecture, and any computer which doesn't meet this basic description is not a computer. If a computer doesn't let the user send opcodes directly to the processor, that's a bad idea. If the computer doesn't let the user read from or write to memory, that's a very bad idea. And if the computer doesn't let the user access input/output, then that computer is pretty useless.
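
To show how little it takes to satisfy that description, here's a minimal sketch of such a machine, with a made-up opcode set of my own invention (nothing here comes from any real instruction set):

```python
# A minimal sketch (my own invention, not any real instruction set) of the
# "great idea" above: a processor which executes whatever opcodes the user
# feeds it, wired to memory and a simple input/output mechanism.

MEM_SIZE = 256

def run(program, inp):
    mem = [0] * MEM_SIZE   # memory: the user can read and write all of it
    out = []               # output mechanism
    acc = 0                # a single accumulator register
    pc = 0                 # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":       # acc <- mem[arg]
            acc = mem[arg]
        elif op == "STORE":    # mem[arg] <- acc
            mem[arg] = acc
        elif op == "ADDI":     # acc <- acc + arg
            acc += arg
        elif op == "IN":       # read the next input value into acc
            acc = inp.pop(0)
        elif op == "OUT":      # write acc to the output
            out.append(acc)
        elif op == "HALT":
            break
        pc += 1
    return out

# Usage: read a number, add 2, store it to memory, load it back, output it.
print(run([("IN", 0), ("ADDI", 2), ("STORE", 0),
           ("LOAD", 0), ("OUT", 0), ("HALT", 0)], [40]))   # -> [42]
```

Everything in the basic description is there: a processor the user programs directly with opcodes, memory the user can read and write, and input/output.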

It seems to me that people these days are praising the wrong technologies. There is still this massively obsolete idea that technology is developing rapidly, and that every new iteration of hardware or software is "better" simply because it's newer. In fact, this has not been the case for about 25 years (as of this writing in the year 2020). A computer exists so that people can make use of it, and if a computer is faster than its predecessor but less usable, then that is a loss for computer design, not a gain. The industry is going in precisely the wrong direction, because it has already sold people whatever they can actually make use of, and so it has to invent useless gimmicks which no one needs, but which can be sold at a premium, in order to stay in business.

And then, in the midst of these thoughts, Half-Life: Alyx was released.

HL:A has received nearly universal praise as a game. It looks great, it plays well, it does interesting things with the existing Half-Life storyline, and for those who play it, it's fun. However, most people, even gamers, haven't played it and can't play it, because it requires a VR headset, which most people don't have. I'm not the only one who has criticized the game's hugely controversial decision to be VR-only: a decision which didn't need to be made, and which seems to have been made arbitrarily, for no reason other than that Valve could claim to have made something which no one else had.

This decision is baffling, mainly because, again, most people don't have a VR headset. It's a bit different when services expect you to have a smartphone, because most people do have a smartphone by now; even that is something I object to, because it excludes people who aren't willing to pay hundreds of dollars for a piece of hardware they don't need, but at least one can correctly say that most people have a smartphone, and so the service is accessible to most people. In contrast, Valve's decision to make HL:A VR-only excludes the majority of their potential customers. Was this because they just don't care and figure they can afford to snub the vast majority of their target audience, or did they assume that people would want to play the game so much that they'd go out and buy a VR headset just to play that one game? Either way, it's a terrible move.

Half-Life: Alyx is not even all that innovative. Sure, its form as a VR game allows it to do a handful of creative things which non-VR games can't. At the beginning of the game, someone points a firearm at you, and you're required to put up your hands in the universal gesture of surrender, something which VR gloves or controllers enable in a way that a keyboard and mouse don't. The game comes up with a handful of other tricks like this to justify its VR-only status, but all of these are really just gimmicks. At its core, HL:A is just another first-person shooter, and could easily have worked with the standard controls that FPSes have been using for decades.

Of course, as with any new technology, people have found ways to be creative with it which its designers didn't expect. There's a widely-shared YouTube video of a math teacher holding a math class inside the game, using its ability to write on surfaces, and it looks pretty cool, but again, this is just a gimmick. I'd even be okay with such gimmicks if they didn't require people to invest a lot of money in something which is, again, mostly just a toy.

One of the most important things about a computer is its flexibility. A computer isn't just something for sending e-mails and playing games; it's a general-purpose information and data tool. If you can use that tool for playing games, that's fine, but the reason most people have a computer is that they can use it for other things as well, from researching information to paying their bills. Because of this generality, it's important for the parts of a computer to be repurposable; you can use a keyboard to write an e-mail, type in a search term, enter data into a spreadsheet, or provide movement inputs for a game. A keyboard is important precisely because it applies to so many different kinds of use. By contrast, a VR headset has a very limited set of uses; you can't use it for most of the things you'd do with a computer. That makes it a very specialized item whose expense most people can't justify, and for that reason, you shouldn't make software which requires people to have one.

I've always complained about games that require joysticks, and that's coming from me, an avid fan of flight simulators, the one genre where having a joystick really makes sense. As much as I love flight simulators and understand that flying them with a joystick makes far more sense than flying with a keyboard or mouse, I have always criticized sims which require one, partly as a matter of expense, because it forces the player to buy something they may not need. And a basic joystick costs only around 20 bucks, whereas a VR headset costs hundreds. Who can justify that kind of expenditure for something that is, fundamentally, just a toy? Especially when it's a toy you'd probably buy just for the sake of playing one game. If there were a lot of games offering good VR experiences, that might be one thing, but Half-Life: Alyx has been described as VR's killer app, meaning you might buy the hardware just to play that one game alone, which is awfully wasteful. To be sure, it's fine to have VR support in games: to offer the option of playing with a VR headset if you have one and want to play that way. What's not okay is making a game require a VR headset, a specialty apparatus which in many cases costs more than an entire computer.

Computer technology reached its peak years ago and has stagnated since, which wouldn't be a problem if people didn't have this bizarre drive to keep creating "new" things for absolutely no reason. Today, people are praising the wrong technologies, getting excited about portable devices with tiny screens and uncomfortable on-screen keyboards when, for much less money, they could get full-size monitors and keyboards which are far more comfortable to use. Technology is no longer about creating better devices that help people; now it's just about producing more useless junk to keep a handful of companies in business. And what's sad is how eagerly people gobble it up, getting excited about each new product release as if it were something that will improve their lives, when in fact everything people need was already developed in the previous century. It really shows just how bored people are with reality and their everyday lives, and how desperately they cling to trivial distractions to give their lives meaning.