Inevitably...
But can it play Crysis?
Mine's the one with the collection of 20 year old games in the pocket --->
3027 publicly visible posts • joined 1 Mar 2007
I remember with Virgin and EE ages ago having to tick the 'I'm an adult and allowed to see adult things' box on my ISP and phone accounts.
However, how many households have a separate ISP for the adults and the children? Unfilter the DNS for the adult members and you've unfiltered it for the kids too. And even if you're not a regular user of porn, 'adult things' is a much broader category than just that. Leave it filtered for the kids and there's a lot of non-naughty content you may need access to but find blocked.
Just curious: for anyone in the education system these days...
When I was at school, computing facilities consisted of a second hand PDP8/e and a couple of SWTPC 6800 systems that the head of Computer Science had built himself. By the time I left a (singular) BBC Micro had been added to that collection.
Now we're way into the era of networked school computers, do they encourage the use of school VPNs to enable pupils to access networked school resources for homework when off campus?
Edit: I suppose such VPNs could be fairly easy to identify and be whitelisted.
Also in the crazy but true file...
The cheapest way to get a 1970s/80s home computer to talk to a modern TV is often to use a Raspberry Pi Zero as an RGB/component video to HDMI converter. That same Pi could emulate the host computer many times over on its own.
IIRC, CP/M's a pain because the Z80 needs ROM at 0000 to start up, but CP/M then needs RAM at 0000, so the boot procedure has to start from ROM, copy what it needs into RAM, and then page the ROM out and the RAM in at the bottom of memory.
Not too hard to do if you're working from scratch, but needs some work with a soldering iron and a bit of extra logic patched in on a flying lead if you're retrofitting it to something it wasn't intended for.
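The page-switch trick above can be sketched in a few lines. This is a toy Python model of the idea, not real hardware: the ROM image, addresses, and the single 256-byte page are all made up for illustration.

```python
# Toy model of the CP/M boot problem: the Z80 fetches its first
# instruction from address 0x0000, which must be ROM at reset, but
# CP/M later needs RAM at 0x0000. The usual fix is a ROM that can be
# paged out once boot is done, with RAM shadowed underneath it.

ROM = bytes([0xC3, 0x00, 0x01] + [0xFF] * 253)  # pretend 256-byte boot ROM
RAM = bytearray(256)

class MemoryMap:
    """Reads come from ROM until it's paged out; writes always hit RAM."""
    def __init__(self):
        self.rom_enabled = True  # the hardware asserts this at reset

    def read(self, addr):
        return ROM[addr] if self.rom_enabled else RAM[addr]

    def write(self, addr, value):
        RAM[addr] = value  # writes 'leak through' to the RAM underneath

    def page_out_rom(self):
        self.rom_enabled = False

mem = MemoryMap()
# Boot code: copy the ROM contents into the RAM shadowed beneath it...
for a in range(len(ROM)):
    mem.write(a, mem.read(a))
# ...then flip the page register; execution carries on from identical bytes.
mem.page_out_rom()
assert mem.read(0) == 0xC3  # address 0x0000 is now supplied by RAM
```

The key point is that writes go to RAM even while reads come from ROM, so the boot code can copy itself 'through' the ROM before switching.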
"putting up tens of thousands of comsats at a couple of tons each."
Geostationary communication satellites tend to be relatively massive (typically 3,000-7,000 kg).
Low Earth Orbit communications satellites, which are the ones launched in large constellations such as current-generation Starlink and Iridium, come in around the 800 kg mark. Earlier-generation Starlink satellites were around 300 kg each.
Indeed, the world is not short of ISS mockups. There's a dry one too at the Johnson Space Center:
https://www.nasa.gov/johnson/space-vehicle-mockup-facility/
and also various Russian mockups.
"What’s even more shocking is the fact, that none of the commenters seem to have caught the blunder yet"
Actually, Bebu sa Ware was the first to point out the true nature of 8086 segments, about 12 hours before you posted this.
One of the sources of annoyance with the way segment registers worked was that when compiling software you had to select the memory model you wanted - if I remember correctly you had a choice of 5:
- Tiny: everything (code, data) shared the same 64K segment.
- Small: code and data could each be up to 64K, but in different segments.
- Compact: code was up to 64K, but data could be larger.
- Medium: code was as large as you wanted, but data was up to 64K.
- Large: code and data could each be larger than 64K.
The problem that resulted: say you opted for one of the smaller memory models in version 1 of your software, and then your program or data requirements grew and you wanted to create a more capable version 2. Either or both of your 16-bit data or function pointers could suddenly become 32 bits, which had the capability of royally screwing things up, particularly if they were parts of data structures, because the sizes of those structures changed. If serialising data to save it consisted of just dumping a block of memory to disc (and remember, storage was limited and speeds weren't great, so it was common to copy it out the fastest way possible), you could easily create incompatibilities between data from old and new versions.
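The struct-size problem can be made concrete with Python's struct module. The record layout here is entirely hypothetical, just to show how one pointer field growing from 16 to 32 bits silently changes the size of a memory-dumped record:

```python
import struct

# Hypothetical record a DOS-era program might dump straight to disc:
# a 16-bit id, a pointer to a name string, and a 16-bit flags word.
# In a small-memory-model build the pointer is a 16-bit offset; rebuilt
# with a larger model it becomes a 32-bit segment:offset pair.
small_model = struct.Struct("<HHH")  # id, near pointer, flags -> 6 bytes
large_model = struct.Struct("<HIH")  # id, far pointer,  flags -> 8 bytes

print(small_model.size)  # 6
print(large_model.size)  # 8

# A v1 file written as a raw memory dump no longer parses under v2:
v1_blob = small_model.pack(1, 0x1234, 0)
# large_model.unpack(v1_blob) would raise struct.error: wrong blob size
```

Same source code, same structure declaration, different on-disc format - purely from recompiling with a different memory model.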
Incidentally, long pointers were often stored as segment:offset. Since the segment register was 16 bits, you had the added inefficiency of needing 32 bits to represent a 20-bit physical address, and as someone pointed out elsewhere, a whole load of different segment:offset combinations represented the same physical address, making long pointer comparisons a pain.
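The aliasing is easy to demonstrate. A short sketch of real-mode 8086 addressing (physical = segment * 16 + offset), plus the usual normalisation trick for comparing far pointers:

```python
# Real-mode 8086 addressing: physical = segment * 16 + offset.
# Many segment:offset pairs alias the same 20-bit physical address,
# so comparing far pointers means normalising them first.

def physical(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF  # 20-bit address bus wraps

def normalise(segment, offset):
    """Canonical form: fold all but the low 4 bits of offset into segment."""
    phys = physical(segment, offset)
    return phys >> 4, phys & 0xF

# 4096 distinct pairs can name the same byte - both of these hit 0x12345:
assert physical(0x1234, 0x0005) == physical(0x1000, 0x2345) == 0x12345
# Normalised, they compare equal:
assert normalise(0x1234, 0x0005) == normalise(0x1000, 0x2345)
```

Comparing raw 32-bit segment:offset values bit-for-bit gets the wrong answer for exactly this reason.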
"When I eventually read the text I was struck how derivative the material was"
When I first came across Harry Potter, another novel, written some 29 years previously, immediately came to mind. This is Wikipedia's introduction to A Wizard of Earthsea by Ursula K. Le Guin.
"It is regarded as a classic of children's literature and of fantasy, within which it is widely influential. The story is set in the fictional archipelago of Earthsea and centers on a young mage named Ged, born in a village on the island of Gont. He displays great power while still a boy and joins a school of wizardry, where his prickly nature drives him into conflict with a fellow student. During a magical duel, Ged's spell goes awry and releases a shadow creature that attacks him. The novel follows Ged's journey as he seeks to be free of the creature."
Incidentally, the Earthsea books are well worth reading.
True, the schematics created by LLMs are pretty far off for many things.
However, even without asking for a schematic, and just asking it to say in text form what should be connected to what, it still manages to mess things up. The trouble is, it's partially correct, so the wiring diagram of a 555 must be in its network somewhere, but I'm guessing it's polluted with wiring diagrams from other stuff, or alternate 555 configurations, so it gets to a certain point and then the probabilities of the pollutants take over. Even though it gives a confident step-by-step guide for wiring things up, you get halfway through and think 'hey, that pin shouldn't go to that one', and then notice that other things don't make sense either.
It also seems that when asked for the circuit, it can reproduce the standard databook formulae relating frequency and mark:space ratio to the passive components, but then fails to use them correctly to compute suitable values. Again, I'm guessing this is because the timing characteristics involve multiplying resistor and capacitor values together, so there are multiple ways of getting to the frequency you want (e.g. scale up the resistor values by the same amount you scale down the capacitor value). There are multiple versions of the circuit in its training data and, even for the same frequency, the original authors chose different values. For a system that works on probability, those different values may all be similarly probable pathways, and it ends up with a mish-mash of components that don't work together.
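The many-designs-one-frequency point can be shown with the standard astable formula (assumed here to be the one the LLM reproduces, f = 1.44 / ((R1 + 2*R2) * C)); the component values below are just illustrative picks:

```python
# Standard databook formula for the 555 in astable mode:
#   f = 1.44 / ((R1 + 2*R2) * C)
# Many different component sets give the same frequency, which may be
# part of why an LLM mixes values from different source circuits.

def astable_freq(r1_ohms, r2_ohms, c_farads):
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

# Two equally valid ~1 kHz designs: R scaled up and C scaled down by 10x.
f_a = astable_freq(1_000, 10_000, 68e-9)     # 1k / 10k / 68 nF
f_b = astable_freq(10_000, 100_000, 6.8e-9)  # 10k / 100k / 6.8 nF
print(round(f_a), round(f_b))  # both ~1008 Hz
```

Mix R1 from one design with C from the other, though, and the frequency is off by a factor of ten - which is roughly the failure mode described above.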
"you would continue to think about efficiency and better coding practices throughout life."
I used to be in the situation where I had to think about efficiency. My first computer had 5K of program space, and I've programmed EPROMs and PICs down to the last byte.
Efficiency doesn't always yield better coding practices - when you've got a hard limit of the end of your ROM space sometimes you have to take short cuts to make things fit, and from a coding practice perspective they didn't always look nice!
To cover a wide variety of coding problems you need a lot of sample code to build your patterns from.
You could cover at least some of the 'power of 10' rules by filtering the learning examples to just those that follow the rules (e.g. ensuring no training data includes gotos, no compiler directives other than #include and #define, very restricted use of pointers, etc.), so it doesn't 'know' code outside of the power of 10 coding subset. However, I doubt there is enough power of 10 code in the wild to cover the spectrum of questions likely to be asked of it.
This would require a lot of manual work to 1. curate any 'power of 10' code that does exist, and 2. rewrite and validate a significant number of non-compliant examples to provide a wide enough training base of code.
While that might go some way to getting it to produce power of 10 compliant code, by virtue of it not having patterns for certain non-compliant coding structures, I doubt it would be enough to ensure an LLM produced code that completely followed the rules. And it would be a mammoth human undertaking to create the training data, compared with simply lifting non-compliant code from the many sources where it already exists.
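The filtering step, at least, is mechanisable. A toy sketch of the idea - the regexes and the banned list are illustrative guesses, not NASA/JPL's actual rule set:

```python
import re

# Toy filter: reject any C training sample that uses constructs outside
# a crude 'power of 10'-style subset. Illustrative only; the real rules
# (bounded loops, no recursion, assertion density...) need static
# analysis, not regexes.

BANNED_PATTERNS = [
    re.compile(r"\bgoto\b"),                # rule: no goto
    re.compile(r"\bsetjmp\b|\blongjmp\b"),  # rule: no non-local jumps
    # only #include and #define preprocessor directives allowed:
    re.compile(r"^\s*#\s*(?!include|define)\w+", re.MULTILINE),
    re.compile(r"\*\s*\*"),                 # crude: no pointer-to-pointer
]

def passes_filter(source: str) -> bool:
    """True if the sample survives into the training set."""
    return not any(p.search(source) for p in BANNED_PATTERNS)

print(passes_filter("#include <stdio.h>\nint main(void) { return 0; }"))  # True
print(passes_filter("int f(void) { goto out; out: return 1; }"))          # False
print(passes_filter("#pragma once\n"))                                     # False
```

Even this crude version shows the scale problem: most C in the wild trips at least one pattern, so the surviving corpus would be tiny.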