>>21825
There have been roughly 5 waves of form factors of computers:
1. mainframes
the operating systems written for mainframes tended to be extremely expansive and general, and thoroughly engineered from first principles. they also provided tons of facilities for deduplicating effort between programs. this is exemplified by multics, where there was no distinction between memory pages and disk files, and where, since all memory regions (called segments) were secured with a sophisticated ACL-based permission system, they could be shared between programs and users. the multics people basically tried as hard as possible to save on programmer labor: they defined standard interfaces between all the programming languages on the system, and made all languages linkable to the shell, thus making every library on the system a callable command.

because memory access was protected by the hardware, the kernel could be written the same way as the rest of the operating system, so there was no hard wall between kernel mode and user mode like in modern operating systems. the supervisor was just another set of libraries implementing the kernel's functionality, entered through ordinary (gated) procedure calls. this makes it similar to the modern notion of an exokernel, but preceding it by several decades. multics was so resilient to hardware failure that you could split a running mainframe into two computers, like a biological cell dividing, by removing hardware pieces and reassembling them elsewhere, all without shutting it down.

it even had a graphics system, though one quite alien to the modern understanding of the term. it resembled a CAD program more than anything else, combined with a standardized ontology or inventory shared between programs. so if you defined a model such as a teapot, there would be one "teapot" object on the system shared by all programs, instead of being created and recreated over and over by different programs. here again you can see the great attention paid to saving programmer labor.

multics had a vision of computing becoming a public utility, charging for computing time the way you're charged for electricity. the socialist implications of this line of thinking are obvious.
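to make the single-level store idea concrete, here's a minimal C sketch of the closest thing modern unix has to a multics segment: an mmap'd file, where "the file" and "the memory" are the same bytes. this is only an analogy under my own assumptions (the filename teapot.dat is made up); multics did this pervasively and with per-segment ACLs, not as an opt-in syscall:

/* rough modern approximation of multics' single-level store:
 * mmap() makes a disk file directly addressable as memory, so
 * "writing the file" and "storing through a pointer" are the same act. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("teapot.dat", O_RDWR | O_CREAT, 0644); /* hypothetical name */
    if (fd < 0) return 1;
    ftruncate(fd, 4096);                       /* one "segment" worth of space */
    char *seg = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);       /* MAP_SHARED: other processes
                                                  mapping this file see the
                                                  same bytes, like a shared
                                                  multics segment */
    if (seg == MAP_FAILED) return 1;
    strcpy(seg, "one teapot, shared by all");  /* a plain store IS the write */
    printf("segment says: %s\n", seg);
    munmap(seg, 4096);
    close(fd);
    return 0;
}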
https://www.youtube.com/watch?v=Q07PhW5sCEk
https://www.youtube.com/watch?v=L8Bay04lCxs
https://multicians.org/features.html
2. minicomputers
our society was not set up to give the public access to these mainframes; they were jealously guarded inside military installations, universities, and huge corporations. so along came minicomputers, which cost only tens of thousands of dollars instead of millions. minicomputers were slow, underpowered pieces of shit compared to mainframes, so they couldn't possibly run mainframe operating systems, and their design was much more primitive. ken thompson and dennis ritchie needed something to run on their PDP-7, so they started copying a lot of the design ideas of multics while stripping out many of the features. this inherently made the OS much harder to program and use, and less secure and resilient. most of the labor-saving facilities of multics were ripped out, like the memory object sharing, which meant every coder on the system had to re-code the same solutions to the same problems over and over instead of having them baked into the OS. some problems (like error recovery) were just never solved at all, and all you got were crash dumps. the original unix was very primitive, resembling a more elaborate DOS.

minicomputers eventually became fast enough to run the original mainframe OSes, but by that point minicomputer OSes had become hegemonic (and there were also difficulties with hardware compatibility and software licensing). so instead, features were bolted onto the minicomputer OSes to make them somewhat more mainframe-like: dynamic linking, demand paging, resilient filesystems, and some attempts at clustering facilities. but it was always a shitty and incomplete reconstruction, hacks piled on top of an insufficient base instead of the result of a systematic design. various projects, such as Plan 9 and GNU Hurd, attempted to fix the flaws unix inherited from having so many features ripped out of its design, but none of them gained traction because the standard unix design had become hegemonic. the unix people themselves had a collectivist vision, which they called "communal computing". it was kind of a cross between the labor-saving desires of the multics programmers and proto-free-software sentiments.
https://www.youtube.com/watch?v=XvDZLjaCJuw
https://web.mit.edu/~simsong/www/ugh.pdf
3. microcomputers/PCs
these only cost around a thousand dollars instead of tens of thousands. they were even shittier than minicomputers, glorified calculators at first, which then developed into DOS and the early windows versions. these OSes were shittier again than minicomputer OSes: basically a thin layer exposing the hardware to a single app, with no multiplexing, no security, and not much of a programming environment. the fact that windows descends from a single-user microcomputer OS while linux is a clone of a minicomputer OS (unix) is one of the biggest reasons why windows is shittier than linux. all the crap windows shoveled on top of that shitty base to compete with unix-like OSes made it break frequently and made it insecure and bloated, which i'm sure you've all experienced.

you ever wonder why windows updates require app and system restarts and linux updates don't? it's because on linux, a filename and the file's actual contents (the inode) are separate things. an update can point the name at a new inode, while any running program that already has the old file open keeps using the old inode until it closes it, so nothing needs to be restarted. windows never got this feature, because it descends from a shitty primitive dos-like system where files that are open or mapped (like a running .exe or .dll) are locked and can't be replaced. and now they can't fix the issue, because that would break compatibility with everything. this generation was the epitome of individualism in computing.
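here's a small C demo of that mechanism if you want to see it yourself. the filenames are made up for illustration; the point is that the open fd keeps the old inode alive across the rename:

/* why linux can replace files under running programs:
 * a name and an inode are separate, and an open fd pins the inode. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* set up "old" and "new" versions of a file (hypothetical names) */
    int fd = open("libfoo.so.demo", O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return 1;
    write(fd, "old version\n", 12);
    lseek(fd, 0, SEEK_SET);               /* fd now pins the old inode */

    int fd2 = open("libfoo.so.new", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd2 < 0) return 1;
    write(fd2, "new version\n", 12);
    close(fd2);

    /* atomically point the old name at the new inode -- this is what a
     * package upgrade does. the old inode lives on until fd is closed. */
    rename("libfoo.so.new", "libfoo.so.demo");

    char buf[32] = {0};
    read(fd, buf, sizeof buf - 1);
    printf("running program still sees: %s", buf);  /* prints "old version" */
    close(fd);                            /* now the kernel frees the inode */
    return 0;
}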
4. smartphones
these follow the same pattern. they started out more primitive and slow than microcomputers. when the G1 came out, a new OS (android) had to be written for it, because running windows or a standard linux distro on it was a non-starter (android borrows the linux kernel, but everything above it was written fresh). these dispensed with ALL multi-user capabilities, all programming facilities (you cannot modify the OS or the applications without cross-compiling and flashing), etc. they basically became thin clients for cloud services. it is not possible to do any real productivity work on these OSes; they are basically toys, "content consumption devices". you don't have root permission by default, you don't have a host of networking services (NFS is specifically compiled out of the kernel by default by google), you can't control the services running on the device, and you can't control the arrangement of windows beyond one or two apps onscreen. you have to hack the thing to change the software, which breaks hardware-based attestation, which breaks apps. this is not even individualist: you have no control over your device. you can't change the OS from android to something else, and half the system breaks if you stop using google services. this is computing as enslavement. but we're stuck with them due to market inertia and the entrenched interests of manufacturers.

smartphones, right now, are powerful enough to run linux. the problem is that microcomputers eventually gained a host of standard firmware interfaces like the BIOS and ACPI, which made it trivial to move an OS from machine to machine, and in an attempt to make smartphones as simple as possible, the manufacturers ripped all of that out. so on a phone, the kernel must be custom-built for THAT model of phone: the kernel image has to be handed a thing called a device tree, a data structure describing exactly what hardware exists and where, because none of it is discoverable at runtime. this makes porting linux to smartphones practically impossible, since all the information on the hardware layout is secret and all the drivers are proprietary. it's effectively as difficult as porting coreboot to a PC and replacing its BIOS. you could rebuild a kernel with the vendor's binary drivers, but then you can never upgrade that kernel past the version the drivers were compiled for. for linux to flourish on smartphones, phones either need something like a BIOS, the way PCs have, or all the device trees and drivers need to be made publicly available, and neither option is in the interests of the companies who make the hardware or software for them.
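to make the device tree thing concrete, here's a rough sketch of reading one with libfdt (the same library the kernel uses to parse the flattened blob; link with -lfdt). the .dtb filename is made up, and on many boards the node is /memory@<address> rather than plain /memory, so treat this as a sketch of the idea rather than a universal recipe:

/* how a kernel or bootloader learns a phone's hardware layout: not by
 * probing, but by parsing a device tree blob handed to it at boot. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <libfdt.h>

int main(void) {
    int fd = open("example-phone.dtb", O_RDONLY);  /* hypothetical blob */
    if (fd < 0) return 1;
    struct stat st;
    fstat(fd, &st);
    void *fdt = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (fdt == MAP_FAILED || fdt_check_header(fdt) != 0) return 1;

    /* without this blob the kernel has no idea RAM even exists, let
     * alone where the UART or the storage controller live */
    int node = fdt_path_offset(fdt, "/memory");
    if (node >= 0) {
        int len;
        const void *reg = fdt_getprop(fdt, node, "reg", &len);
        if (reg)
            printf("memory 'reg' property: %d bytes of address/size cells\n",
                   len);
    }
    munmap(fdt, st.st_size);
    close(fd);
    return 0;
}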
5. IoT/smart gadgets
these are even more primitive than smartphones, often incapable of running linux at all, running something like fiwix instead. they're always embedded. these are glorified appliances, they're so primitive. installing new apps is usually not an option; changing the software at all means building a whole OS image and then finding points on the board to solder a programmer onto so you can flash it (both of which are straight-up reverse engineering). botnet. you usually have no control over the relationship the cloud has with your data.
the pattern of every new generation of computers is:
1. some of the best OSes ever written were among the first
2. a new computer is released which is way smaller and cheaper, but too shitty and underpowered to run the last generation's OSes
3. the new generation of computers becomes powerful enough to run the last generation's OSes, but doesn't, because the new OSes written for it have already captured the market. since the new OSes are shittier than last gen's, they cope by retrofitting some features from last-gen OSes in a shitty way
4. computing as a whole is now in a far shittier state.
5. repeat. GOTO 2