“The unexamined life is not worth living.”
— Socrates
What is a computer?
This is a difficult question to answer. It's both immediately obvious in a tautological sense (a computer is a computer; you have one in your pocket and on your desk), and yet a deeper concept can be hard to come by. Digital ink in great quantities can be spilled in pursuit of a complete answer.
To cut through that, I think Jack Sparrow really captures the essence of it, in a quote I've adapted to my own interpretation.
“Whatever we want to do, we'll do. That's what a computer is, you know. It's not just a processor and memory and a keyboard and screen, that's what a computer needs — but what a computer is... what Unix really is... is freedom.”
I find that puts it quite well. Computer mastery is freedom. It's a freedom materially ubiquitous, with deep-set philosophical roots, yet one which even the majority of domain experts don't properly enjoy.
We all use computers every day. Open source as a philosophy is deep-set in the computer world; nearly every technology is built from open components. Yet computers as most people experience them, even in Silicon Valley, look something like hideous mind-control addiction gremlins that constantly spy on us from a digital aether that routes to who knows what kind of hidden dark power network behind the scenes. Oh, and the initiation rite to access and understand any of it is restricted to those who can grasp an arcane runic language where a single error can render the whole of the effort inoperable.
This doesn’t have to be so. There is another view of things. One I want to share with you. This view is something akin to the wisdom of the ancients, a viewpoint authored by people still alive today, yet even so still half forgotten. Let us remember!
The First Principle
Every Single Bloody Concept in the Entire Universe of Programming is JUST a Different Word for Shortcut.
Abstraction is just a type of shortcut that jumps a layer up in a model of the problem to hide the complexity of the layer below so that all you have to interact with are the important parts you want to change.
A code library is a series of abstractions of mathematical functions that provides an interface, i.e., a shortcut you can call to do that thing in that library, without having to write that code and debug it yourself.
An API is an interface to a code library that has clearly written documentation you can work from to plug into the shortcut it provides.
A Macro is a keyboard command or text shortcut that expands into a more fleshed-out concept encoded beforehand, so that you can add the specific and relevant details and skip the need to write out, precisely remember, or constantly re-optimize solutions to common problems (a shell sketch of these ideas follows this list).
Both pointers and variables are roughly flexible shortcuts to access a specific predefined piece of memory, or what's in it, which can represent just about anything.
A keyboard hotkey is a shortcut that sends a commonly used command to the interface, making the process of interacting with the computer more efficient and consistent.
A Command isn't something you send a computer; it's a shortcut calling up a block of code the computer already has.
Programming languages and compilers are shortcuts that provide abstractions and a library of well-constructed solutions, so that you don't have to work with all the intricacies and drudgery of machine code.
Machine code is a shortcut that offloads the mathematical work of doing calculations by hand. The first “computers” were humans, and some of them helped put us on the moon.
Vim is a modal editor where the modal state is a series of shortcuts to common text-editing problems and tasks.
Efficient code is a shortcut to getting the computer to do what is desired without a lot of human or computational waste.
The Graphical User Interface is a ‘shortcut’ to interact with only the specific sequences of code the programmer has decided should be accessible to the user.
The Terminal is a shortcut to directly access the specific sequences of code the user has decided to access.
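To make the list concrete, here is a minimal sketch in plain bash. Everything in it (the function name, the alias, the paths) is a hypothetical example, not anything your system ships with:
backup_notes() {
    # a tiny 'library': several steps hidden behind one name
    mkdir -p "$HOME/backups"
    tar -czf "$HOME/backups/notes-$(date +%F).tar.gz" "$HOME/notes"
}
alias bk='backup_notes'    # a 'macro' that expands into the longer call
Once that lives in your shell configuration, a whole archival workflow runs from two characters: bk.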
This is the first principle. Everything in working with computers is about optimizing our relationship with shortcuts. Computer mastery is a cheater's paradise. It's about collecting all the best shortcuts for yourself and, in the Libre ideal, sharing them with others. This is why the concept of Open Source is so powerful, and the vision of the Unix Philosophy so profound. Give everyone the best shortcuts.
It's my opinion, although I haven't asked him, that this underlying experience of sharing concepts and code, the collecting of shortcuts, is what inspired Richard Stallman to write down the founding principles of Free Software, beginning the Open Source movement as an intentional project. As evidence, consider why he loves Emacs: its ability to incorporate macros, putting blocks of shared and reusable code at one's fingertips.
The Second Principle
The Computer Can Do Anything — With a Single Keystroke.
This is the practical upshot of being Turing complete. Once the computer has been programmed and can do something, it can do that thing with a single keystroke. At least, that should be true; on Unix it is true. Most computer systems, however, are spectacularly restrictive, to the point of excruciation.
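You can test the claim directly in bash, which lets you tie an arbitrary command to a keystroke with its built-in bind command (the key and the command here are arbitrary examples):
bind -x '"\C-g": df -h'    # now Ctrl-g prints the disk usage report
One keystroke, no Enter, and the computer does the programmed thing.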
This is the battle ground between Unix and Windows, between the Terminal and the GUI, between Open Source and Proprietary, between Hackers and Stiffs. A Unix Terminal on an Open Source system used by a Hacker can do anything, in a single keystroke.
Unix and the Terminal are nearly synonymous in the modern day. The terminal is what makes Unix special, so let me make it a fully realized concept. This will take some groundwork, but don't worry: along the way I'll also answer, concretely, the less poetical and more tautological end of the question that spurred this whole piece of writing. What is a computer?
The Biggest Error
“The Processor is the Brain of the Computer.”
I’m sure you’ve heard something akin to the preceding.
This sounds completely salient and straightforward; it makes sense. The processor does calculations, and isn't that sort of like thinking, like what the brain does?
Toss that from your head, discard it!
I hate this analogy, and genuinely consider it the most faulty and misunderstood description in computing. Allow me to tell it differently, and in doing so, give you clear insight into the inner workings of these magic stones of silicon we’ve given life with electricity.
The brain of the computer is Memory. The processor is more like the fingers or body of the computer. Whereas its interfaces, like the screen or keyboard or stdout or APIs or even a robotic body are better likened to various tools it can wield.
The processor does what memory tells it to do. This is the fundamental mechanism of how a computer works. The thinking of the computer is contained in code, which is then acted upon by the processor. Feedback from the processor can change the code stored in memory, much in the way touching a hot stove gives feedback to your brain.
Armed with this corrected analogy, we can delve into how a computer works, and how to think about its mechanisms clearly, no matter which program you are working with.
The Universal Abstraction
Every program is a strip of memory that can be drawn into the processor.
This memory can cause actions to be taken, or be acted upon. Execution is the action: giving the processor commands, machine code. Reading and writing is being acted upon: pulling in a strip of data in memory, performing operations on it, and spitting it back out.
Read, Write, Exec. Fundamental concepts, but let's work on the imaginative vision of this.
Everything the computer is doing is stored in memory. In a Turing machine this is a strip of ones and zeroes. In modern machines memory is stored in pages. The word table is perhaps a bit more evocative, primarily in a spreadsheet sense, with columns and rows, and sections of columns and rows at a repeating count. This can be clearly represented in the strip of ones and zeros with red marks at regular intervals, marks that themselves have more distant intervals framing them. Each mark has an address associated with it, a number to the side, that can be called to pull that specific strip of memory into the processor. Fill the imaginary sky of computer memory above you with a two-dimensional grid of software strips. This grid, loosely imagined, is the heart and mind of your computer.
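On Linux you can peek at this grid for a live program; every process publishes its map of memory strips, each with an address range and permission flags (the line count here is arbitrary):
head -n 5 /proc/$$/maps    # the memory strips of your current shell
Each line shows an address range, then r/w/x permission marks (read, write, execute), then the file that strip was pulled from.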
The Boot Up Process
The bootstrap process of the computer starts with sending power to the motherboard, the electronic framework that holds the processor. When the power reaches the BIOS ROM, it begins sending a series of strips of code to the processor that load everything necessary to bring the operating system into memory.
There are intermediate steps where the bootstrapping system is accessible for simple modification by the user, but the process ends by pushing the kernel into memory and directing the processor to begin pulling a sequence of commands from a specific strip of the kernel memory space. At this point the kernel has taken over, and it runs a bunch of checks: strips of memory pulled into the processor ask it to send signals to all sorts of systems on the motherboard, asking for a status report and exploring what capabilities the system it is running on actually has.
The Unix kernel then transfers control over to userspace and calls PID1, the init process. In classical Unix this is a human-readable shell script, the rc script, where rc means RUNCOM, which can be taken colloquially as either run commands or runtime configuration. A very interesting thing to realize about calling PID1 is that the kernel is binary; it talks in the command language of the processor. The rc script, by contrast, is shell script: text characters that have no special meaning to the processor. So to run it, the kernel calls the binary file at /bin/sh (file system root / binary executables / shell code interpreter) and feeds that program the contents of the rc script. The shell then begins to run the rc script, and there are now three programs running on the computer: the kernel, the rc script, and the shell called to translate its text commands into machine commands the processor can directly understand.
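You can see that text-to-binary handoff in the first line of any shell script. A minimal rc-style sketch (the echo is a placeholder for real service startup):
#!/bin/sh
# the kernel reads the #! line above and launches /bin/sh,
# feeding it the rest of this file as text commands
echo "starting services..."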
On Linux, PID1 is often systemd, a suite of mostly binary tools that manage processes running in the background of the system. (Linux also has OpenRC, runit, s6, SysVinit, Upstart, and even more init systems.) On an Apple Mac, PID1 is launchd, a management suite similar in concept to systemd, rather than a shell script.
In either init model, the init process does configuration: changing files and bringing ongoing userspace services, daemons, into memory. These daemons do kernel-like things, such as managing networking, file systems, or subsystems like audio or specialty hardware. Once setup is complete, the init system forks (starts) a user login shell prompt.
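A running system will happily tell you which init model it booted with, since the init process always holds process ID 1:
ps -p 1 -o comm=    # prints e.g. systemd on much of Linux, launchd on a Mac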
The Kernel, An Origin Story
Before we go over to the user login shell, it's quite useful to explain what the kernel is and give a gist of how it works. It's my belief that the way to understand the kernel, regardless of the operating system it comes from, is found in the origin of Unix: the early era of timesharing operating systems. Computers had become fast enough at sequential work that it took multiple people working with the computer to keep it busy at all times, and so not waste the massive effort it takes to manage one.
Timesharing took over from batch processing. Batch processing began when the computers became fast enough that it took several teams of programmers working on writing programs (mostly on punch-cards) to keep the computer busy. The programs would be assembled into sequences and run in batches, grouped together for efficiency.
I think it's very effective to think of the batch-processing computer room, with the multiple large and complex computational machine components in it, as the kernel, before the kernel was ever coded. The control mechanisms and decisions were built into the control room and machinery, and made by the computer operators. Interrupt events to switch out of a batch task that was taking too long were done at the push of a button, decided by floor rules. The interfaces for managing the hardware were built into the system, and how to make them work was written in a manual; this is the equivalent of kernel drivers.
The batch world had a human kernel, just as the pre-digital computer age had human computers. The processes and culture of system operation were then considered and codified into the computer system, creating a shortcut that offloaded the management of the computer to the computer itself. Elegant, genius even. Despite the appearance of cold logic, the history of computing is full of subtle human influence, of putting the 'human thing' into the computer. Anything you can imagine the operators doing in a computer facility very likely has a kernel analogy.
The human-operator understanding of the kernel starts with managing hardware: drivers. But it extends especially into timesharing. By interrupting programs on the processor with the system clock, the kernel habitually restores control to itself, calling a preselected strip of its own memory into the processor before deciding what to feed the processor next, hundreds of times a second.
Humans-as-the-kernel also covers memory management: loading programs into memory when they are called and ensuring there is enough memory to go around. It covers security too, ensuring programs access and do only what system design choices allow, a concern with its origins in keeping batches (as in programs) separate, from before the first virus ever spread.
Kernel origins explained, let's get back to our story of a computer coming to life. The kernel hands off control to a program, the first program; that program sets up the computer memory for a user to take over, then starts up the user control interface. This means pulling known files named at the end of the init script into memory: the login shell executable and its configuration settings. This can get quite complicated, as the login shell could be a graphical shell, such as the X11 window server. But with our clear picture of memory we can make it simple.
The UI, Its Just a Shortcut to Run Code
Think of graphics as a grid table made up of pixels at the resolution of your monitor. This section of memory is the framebuffer, and it holds an exact map of what should be displayed on your screen moment to moment. All the processes that render visual things for your display write new data into its rows and columns. Each cell, or pixel, has a string of data that gives it color and brightness, along with a list of memory addresses associated with that pixel. You can layer frames for transparency and keep multiples for multiple monitors. This simple picture of memory for graphical systems doesn't perfectly map to how the code works, but it makes the whole thing far less alien than the magic that appears before us.
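On a Linux machine you can touch this memory almost directly. The classic demonstration, assuming you are at a raw virtual console (not under X or Wayland) and have permission to write to the framebuffer device:
cat /dev/urandom > /dev/fb0    # paint the whole screen with random pixels
Every byte written lands in the frame of memory the display hardware is reading from, which is the whole trick.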
From here we have a GUI. The computer interface we all understand. The start menu is a shortcut listing all the commonly used programs, and the button is a visual block of pixel addresses in the framebuffer all associated with the same memory address group. When clicked, that framebuffer sends a signal to its worker process, which then calls the memory addresses listed for that pixel group. The processor then walks through the address contents, finding the associated program file wherever it is located in the file system, pulls the file into memory, and then sends its first strip of executable memory to the processor, to begin the execution of the process.
The same exact process happens when you log into a text prompt. The shell, often bash (Bourne Again SHell), takes a command, which it then pulls into memory from the file system, and feeds the first strip of executable memory into the processor, to begin the process.
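You can watch the shell make this decision before anything is pulled into memory. Its type builtin reports what a command name resolves to:
type ls    # 'ls is /bin/ls': a file to load and execute
type cd    # 'cd is a shell builtin': code already sitting in the shell's own memory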
Strips of Moving Memory is exactly what a computer looks like from the inside. You don’t need to think about the processor except as a black box that does the things contained in memory. When you’re working with code, or anything on the computer, the things the computer does are the things in memory, concentrate your understanding on that, visualize how it plays together.
A memory block can be a function; as you become familiar with that function, give it color and clarity in your mental model of the computer. There are now That Specific Function strips in your memory. Give data color and clarity too, and watch it transform according to your function. As your understanding of your computer and the details of how it works matures, this mental model can continually grow with it, accurately mapping the way code and the computer work out of the linguistic semantics of whichever language you are working in and into the visual, imaginative side of your brain.
Use 20%! Use your brain in parallel! Offload the abstraction of the computer into a part of you that can think about it flexibly, but which isn’t directly involved in working with the language of code.
The Third Principle
The Computer Is YOUR Machine, It Should Have YOUR Shortcuts.
We understand the GUI, but the point of Unix is all about the shell. It's a programming environment masquerading as a control interface for the computer. Not only can you, by typing a few words or a few lines, tell the computer to do just about anything you want, you can also put that set of commands into a script that can be called by a single letter on your keyboard. Or at the very least, a single letter + ENTER.
alias d='~/path/to/Do-amazing-thing-in-a-script.sh'
In a single line we contain an elegant description of the beauty of Unix. What isn’t fully unpacked in that single line is that if you use a terminal to work your computer, everything you do on your computer is just about this far away from being a script called by a single letter on your terminal.
echo "command -i just wrote" > file-naming-the-script.sh
To be fair, there is a lot of glue and wrapper work that makes any particularly useful or interesting script work, but I genuinely hope the directness and elegance of what this approach to using a computer actually means is not lost on you.
If you use it properly, you only ever have to write down how to do something once on Unix. Once you've done the work once, you can recall it however you like, to do it again. There are better and worse ways to do that, to encode commands for your shortcuts. This is where culture comes in: where hackers, open source, meetups, and learning from each other become truly vital!
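As a small sketch of 'write it down once': bash keeps your last command in its history, and the fc builtin will hand it back, ready to be filed away (the paths and names here are illustrative):
fc -ln -1 > ~/bin/do-it-again.sh    # save the last command you ran as a script
chmod +x ~/bin/do-it-again.sh       # mark it executable
alias d='~/bin/do-it-again.sh'      # recall it with a single letter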
The Wisdom of the Ancients
/etc/skel
Ok, in some sense this is a bit of a reach, but in practice I think it's a fundamental idea, in fact the fundamental ideal, and it has been lost.
.dotfiles
On your first day on a computer, you as a four-year-old, or whichever significantly more grown-up version of yourself you have become, should work with source code on your computer and modify it. Not write anything from scratch, but play with some segments of code put together by someone with a lot of experience and perspective, and get the code working for you the way you want. Configuration and scripts.
This is why r/unixporn is so damn cool. This is what the culture of Unix and computing should be like! The core philosophies of different Unix ricing styles should be built into the system. (Ricing, or rice, comes from car-detailing culture, but might as well stand for Really Impressive Configured Environment; it's the jargon term for a highly customized user account on Unix.)
Okay, okay — now.
Let me explain all this hype from the top!
/etc/skel is a directory built into the old-school Unix file system where system administrators would put .dotfiles, i.e., user-account utility configuration files and scripts, to be auto-generated into the home folder of new users added to the server they were timesharing on together. This was a kind of welcome process, a built-in knowledge- and workflow-sharing process that has been largely forgotten with the move from timesharing servers to the microcomputer desktop, where almost every computer has one user account (along with the root superuser).
.dotfiles are user configuration files, settings, so called because Unix file listings hide by default any file whose name begins with a period. This is where the configuration information pulled into memory by programs as they start is stored for each user. It's what defines the experience of the computer for the user, and what the workflow is like for them. As in, vitally important for what the computer feels like.
The default stored in /etc/skel is usually minimalist, built around the necessary framework to ensure the user environment is stable and reliable. This is appropriate and sane. In fact, a default environment that adds a bunch of flavor and modifies the default .dotfiles to enable a bunch of features is quite a problem, and an incorrect way to approach designing a system implementation.
But we shouldn’t stop at /etc/skel. r/Unixporn should not be the only living culture of user .dotfile sharing, moreover the fundamental concept of this should not be an obscure part of the Unix userbase. Mature workflows and user experiences should be trivially accessible. This Is A Fundamental Concept. Here is what it could look like.
/etc/skel/default should be minimalist. But there should also be /etc/skel/admin, /etc/skel/pwrusr, /etc/skel/gui/ux-name, /etc/skel/artist, /etc/skel/video, /etc/skel/audio, /etc/skel/web, /etc/skel/writing, /etc/skel/prgrmr, /etc/skel/novice, /etc/skel/forkids, /etc/skel/accounting, and in all likelihood, /etc/skel/emacs. Each a suite of configuration and scripts designed for specific common use cases and workflow styles. These should be built into every Linux and BSD system, at the distribution level. Distribution wars are largely silly, but .dotfile workflows and user experiences? Profound, interesting and vital — worthy of long conversations, conversations on how to optimally use our computers — conversations which we aren't really having. A profound depth of knowledge we aren't really sharing with each other.
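The tooling hook this needs already exists: useradd can seed a new home directory from any skeleton directory you name, so per-profile skeletons would slot straight in (the pwrusr path is my proposed layout, not something distributions ship today):
useradd -m -k /etc/skel/pwrusr alice    # as root: -m makes the home, -k names the skeleton to copy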
Yes, .dotfiles exist in great numbers on GitHub; it's kind of fantastic as a resource. But many are for personal use, and most aren't crafted as mass-market distributables. That requires an entirely different approach to the concept and design vision of .dotfiles. They should be collaborative enterprises, and they should be how we share and carry the culture of using our computers. How to set up a sophisticated suite of .dotfiles should be part of the installation guide of every Unix wiki.
How I Would Design a Distro
Distributions don't matter; code bases and dependency trees do. The user profiles in /etc/skel should have bootstrapping install scripts that take .dotfiles and dependency lists and call the package manager to install the dependency tree, so that all the programs expected by the user profiles are available on the system. Really elegant tooling might mean that different workflows could be built using just the tools in the base of the distribution: a suite of similar, more minimalist profiles stored in /etc/skel/base/ux-name, etc.
The core design of the .dotfile system should be distribution- and system-agnostic. It should be able to ask what system it is running on, then call the right package manager, and perhaps program list, as required. Once this becomes polished and popular, it should be upstreamed into various distributions, and they should start iterating on the theme, developing their own home-grown and diverse .dotfile user experiences based on the vision of the maintainers.
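A minimal sketch of that agnostic bootstrap, assuming each profile carries a plain-text dependency list (the file layout is hypothetical):
#!/bin/sh
# install a profile's dependencies with whichever package manager exists (run as root)
deps="/etc/skel/pwrusr/deps.txt"
if command -v pacman >/dev/null 2>&1; then
    pacman -S --needed - < "$deps"
elif command -v apt-get >/dev/null 2>&1; then
    xargs apt-get install -y < "$deps"
elif command -v dnf >/dev/null 2>&1; then
    xargs dnf install -y < "$deps"
else
    echo "no known package manager found" >&2
    exit 1
fi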
Most of the necessary support structure to do this already exists. GitHub is full of .dotfiles people want to share with the world. ArcoLinux comes with sophisticated tiling user environments following a similar scheme to what I've outlined. One of the more popular Linux YouTubers, DistroTube, is developing his own ricing-based distro, DTOS. And there are .dotfile bootstrapping scripts like LARBS.xyz from Luke Smith.
All that is needed is organizational effort. We'll need some creative vision to consolidate a series of good first-draft concepts of the different .dotfile workflows and user experiences, so we can package all of it in an elegantly agnostic way that is easily upstreamed. Then maybe some tooling work on programs like adduser or useradd to adapt to a more complicated /etc/skel arrangement, and done. The core concept can spread everywhere.
World as it should be.
For Smart People
“Ain’t Nobody Got Time Fo’ Dat!”
How efficient can you get if everything you want to do, no matter how complex, is only a few key presses away? Almost no one knows. But it is absolutely the birthright of every Unix user. It's the central ideal, along with sharing our shortcuts. We should build our culture around this. It's smart.
This is the smart way to use a computer. It's also an easy and elegant way to use a computer; it is even the way people used to learn to use computers. But the knowledge of it has been lost. Not completely, but the grok of it is rarefied.
“I use the terminal and a tiling window manager because its efficient.”
This doesn’t even begin to communicate what is actually going on, or how to get there. It can also come across as some kind of 1337 H4X0R mentality. Everything is spectacularly obfuscated, and inaccessible without an obnoxious grind through useless tutorials or advice.
Depictions of computers and how to use them are either too dumbed down or too expert-oriented to be useful, and this is nearly all the content that exists out there for people to learn from. The computer itself is the tool we should be learning from; how to use it should be built into not just the manual but also the user experience. Scripts and .config files are their own self-contained tool-tips, especially with good documenting comments and commented-out options.
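Here is what a config file as its own tool-tip looks like; this fragment is an illustrative sketch in the style of a shell rc file, not any real program's defaults:
# Prompt style: "full" shows git branch and exit status, "plain" is minimal.
PROMPT_STYLE="full"
# Uncomment to have quick edits open in vim instead of the system default:
# EDITOR="vim"
The commented-out option teaches the feature even while it is switched off.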
Learning through abstractions alone isn't particularly helpful; getting hands-on is powerful. Getting hands-on the right way is powerful, profound, and rare. It should be common; moreover it should be obvious, and we should understand what is necessary, right, and helpful in the process. That is going to take some smart people to pull off.
Let's be smart together.
Insights on how to develop and participate in a truly better world coming weekly!