Re: I give it 30 years...
It doesn't require much fuel to get in a state where you'll eventually fall. It does take quite a lot of fuel to fall quickly from a stable orbit.
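To put some rough numbers on that, here's a back-of-envelope sketch (my own illustrative figures, not from the article) using the vis-viva equation: nudging a 400 km circular orbit so its perigee dips low enough to decay costs a tiny burn, while falling straight down means cancelling the whole orbital velocity.

```python
# Back-of-envelope sketch (illustrative numbers): delta-v to (a) set up an
# eventual decay from a 400 km circular orbit vs (b) kill orbital velocity
# outright and fall immediately.
from math import sqrt

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3  # mean Earth radius, m

def vis_viva(r, a):
    """Orbital speed at radius r for an orbit with semi-major axis a."""
    return sqrt(MU * (2 / r - 1 / a))

r_start = R_EARTH + 400e3             # 400 km circular orbit
v_circ = vis_viva(r_start, r_start)   # ~7.67 km/s

# (a) Retro-burn to drop perigee to ~200 km and let drag finish the job
r_perigee = R_EARTH + 200e3
a_transfer = (r_start + r_perigee) / 2
dv_decay = v_circ - vis_viva(r_start, a_transfer)

# (b) Cancel all orbital velocity and drop like a stone
dv_drop = v_circ

print(f"circular speed:              {v_circ:.0f} m/s")
print(f"delta-v for slow decay:      {dv_decay:.0f} m/s")   # ~60 m/s
print(f"delta-v to fall immediately: {dv_drop:.0f} m/s")    # ~7700 m/s
# Two orders of magnitude apart: eventual decay is cheap, falling fast isn't.
```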

>This raises the question: why recruit somebody if an AI can assist lawyers as a virtual paralegal, help academics with their work, or do something as mundane as booking travel?
Can the 'AI' do that? To the point where you no longer need the assistant?
We'll see, I guess. So far, though, I don't think so. In any job that requires some exactness, LLM output needs to be very carefully double-checked, and it will sometimes be wrong in subtle ways. It takes longer to validate it than to do it myself.
I respect the author's point of view, but the proposed solution looks convoluted and pointless to me.
First of all, there really is no legal basis for this. A derivative work of an unlicensed work does not become public domain; it just becomes illegal. You would need new legislation in order to do this. You'd need to lobby for it, and hope another lobby doesn't get loopholes into it, the whole sausage-making. It's a messy, lengthy process, even when it works.
And then, what do you get? Training these models is staggeringly expensive. If you remove the ability to monetize them, nobody will ever make another one. There is no scenario where you get an open-source ecosystem of LLMs - not out of this proposal, I mean; there may be ways, but this isn't one. At best, they'll start making them out of licensed material, which is fine, but does not result in open-source models.
So this proposal amounts to asking for new legislation, just to get a few models for free and then either put LLM companies out of business or force them to behave in order to keep subsequent models proprietary.
I mean, it's okay, but why don't we just declare that LLMs are derivative works of their entire training set? That doesn't require any new legislation, it's a decent argument within existing copyright law, and the end result is that current models get deleted, after which LLM companies either go out of business or start to license training sets.
So the only difference in result is that we don't get a few models for free - but we only have to convince judges, not politicians. Also, are we sure we want to keep those models anyway? Part of the problem is that they are full of PII. There is no fix for that.
I'm not sure I would trust this not to be a trojan horse. Having said that, though: some competition to the crapfest that is Bluetooth had to appear eventually. If it can actually reliably find an enabled device one meter away, and it can do 2-way audio (voice+mic) at 21st century quality, and it doesn't have a different pairing procedure for every single device, it's already better than Bluetooth.
Yeah, they are flying at mach stupid. And most other things in orbit don't have any armor, because of weight. The quote I've always heard is that flecks of paint are still dangerous, and I see no reason to disbelieve it.
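A quick sanity check on the paint-fleck claim, with my own illustrative numbers for mass and closing speed: kinetic energy scales with the square of velocity, so even a gram of paint at orbital closing speeds arrives with far more energy than a rifle bullet.

```python
# Illustrative numbers only: kinetic energy of a paint fleck at orbital
# closing speed, compared against a rifle round for scale.
def kinetic_energy_j(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

paint_fleck = kinetic_energy_j(0.001, 10_000)  # ~1 g fleck, ~10 km/s closing speed
rifle_round = kinetic_energy_j(0.004, 900)     # ~4 g bullet at ~900 m/s

print(f"paint fleck: {paint_fleck:,.0f} J")    # ~50,000 J
print(f"rifle round: {rifle_round:,.0f} J")    # ~1,600 J
# The fleck carries tens of times the energy of a rifle bullet.
```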
All of that said, removing the big stuff is a good idea. A big piece of junk is just a million small pieces of junk that haven't been hit yet. Taking it down while it's still in one piece is way, way easier.
I'm going to take a stab at answering, from the towering height of knowing nothing at all.
>So, what's the security on this ? Are you going to be broadcasting your brainwaves for anyone to read them ?
I'm fairly sure these are wired. I guess the ends of the printed wires are going to be taped to an actual wire, which goes into a recorder. I can't see wanting to add radio noise to an already-noisy signal.
>I have to wash my hair every morning. What's the impact of that going to be ?
You can't wash your hair while wearing these. But, good news! It's okay, because you have to shave nearly-bald anyway. Also, it's just for 24 hours. The exam, that is. Fixing the shaved-nearly-bald thing is going to take longer.
>And how long do you need to read an EEG anyway ?
I don't know, but if this works like many other long-duration medical scans, the point is not so much how long it takes to read the EEG, but rather the ability to capture unpredictable, short-lasting events that could happen at any moment during the day. For example, there are heart exams that involve recording your pulse and blood pressure continuously for a day, even though those are trivial to measure at any given time.
>I'm thinking of the early days of computing and punched card machines.
But there are critical differences. Early computing devices had lots of problems, but they were well-understood problems (by trained personnel). They sometimes did not have any technical solution, but they always had a clear theoretical solution. When you're in that situation, the technical solutions usually show up soon enough, and they did.
The main problems with LLMs are not well understood even by specialists, and they don't have any clear theoretical solution. When you're in that situation, you'll be stuck there until someone comes up with something new, which might happen at any time or never.
I don't know how much this can be applied to anyone else, but, for me at least, a significant deterrent is all the news about major improvements to EVs that are in the lab, or hitting production lines, or trickling down from high-end models. Things like solid state batteries, otherwise better batteries, working V2G, or even the plausibility of lower prices as scale picks up. Not all of this is going to pan out, but some of it will. And there's the infrastructure; the number of charging points might grow faster or slower, but it's not going to go down.
All of this tells me, if you like the idea of an EV - hold out for a while longer, and you might get a substantially better one. Or, same thing from another point of view, get an EV right now and you might see its resale value plummet as the next generation is much better and/or relatively cheaper.
ICEs, on the other hand, are as good as they're going to get.
A great excuse! Think of all the crimes it could be applied to. Can I electrocute you, but just for a moment? Is it okay for someone to cop a feel, if he's real quick about it? How about detonating a large firecracker in front of your house without warning, surely it's not a problem?
Agree. I feel that the whole point of Asimov's robot stories is that you cannot have true intelligence AND reliable behavior boundaries. Those two requirements are directly at odds, even though this is non-obvious.
This is something the people who try to design LLM guardrails should think about.
I've just read the article and I don't know the law either. That said, from what I understand, a "political ad" for this law is anything that relates to an upcoming election or referendum. So a pro-abortion ad would be covered by the law if there was a referendum on abortion coming up.
I've no idea whether that would extend to pro-abortion ads if there was no referendum, but there was an election where abortion was considered a major issue. That ambiguity may well be resolved in the law for all I know, but if it isn't, it might be one of Google's reasons.
I have the feeling that a big part of Google's problem is that, even if there were no ambiguity, there is no automated way to figure out whether an ad is covered or not. There is no central database of all elections and referendums. Even if there were, ads do not come pre-tagged, and so-called AIs are not reliable enough to identify something that could land you in court if misidentified.
Google's business model relies strictly on full automation; they can't afford to have a human vet ads.
All things considered, I'm fine with the end result. Most political ads give zero or negative information anyway.
>Copyright holders can't make it illegal for you to read and digest a book, or even to memorise it down to the last comma. Why should they be allowed to stop an LLM doing the same?
Nobody is suing an LLM.
All of the lawsuits are against the LLM's developers.
LLM developers are not reading and memorizing books. They are feeding books to a computer algorithm, running it, and commercially exploiting the output.
During this process, let me reiterate, they do not read books, nor do they take any action that remotely looks like reading books. They mostly don't even know which books they are processing.
The LLM does do something which looks like reading books, but that is irrelevant because, again let me reiterate, nobody is suing the LLM.
On the other hand, doing computer processing of something under license and then selling the output is very unlikely to be covered by the license, and if it were, it would be in the list of things you are explicitly forbidden from doing.
I really can't put it any simpler than this.
Also, since this is something that comes up: just because something is freely available to read and download does not mean it is in the public domain. It just means that there aren't technical restrictions, which makes the legal restrictions hard to enforce. But they still exist.
I haven't read the entire paper, but from reading the article I get the impression that this is yet another study that does not attempt to propose any solution.
The reason being, it's an uncomfortable truth that those jobs cannot be saved as they are. If you restrict the use of automation and/or force companies not to fire redundant workers, then those companies will be outcompeted by competitors operating wherever automation is unrestricted, and eventually go bust, at which point the jobs will be lost anyway.
Ultimately, the big-picture problem isn't so much automation eliminating jobs; it's that the increased profits from improved productivity don't get efficiently reinvested in the economy, but instead become increasingly concentrated in a small number of actors. Part of this is direct, e.g. laying off people and pocketing the profits; an even bigger part is a consequence of how the financial sector operates, e.g. it is insane that you can reliably get much better returns from the stock market than from setting up a new production line.
If you fix that problem, you'll find that new jobs get created, and you'll have enough spare money to set up retraining programs and unemployment benefits so that displaced workers can switch to those jobs without trauma. The only ones to lose will be the ultra-rich (not even the rich, or very rich).
>Cutting down trees that eat CO2 to reduce CO2 - WTF???
"Trees eat CO2" is a gross oversimplification. It's true enough for slogans and for kids, but not for serious talk. I think that our current tendency to get bored after reading more than four words in a row is a pretty large contributor to society's ills. Unfortunately, reality has no obligation to be describable by snappy soundbites alone.
In this specific case, cutting trees down to build something more durable than those trees (and no, not all trees live for centuries), and then replanting them, will reduce atmospheric CO2 more than leaving them alone. High school education and about thirty seconds of thinking should be enough to understand why.
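Since the comment appeals to thirty seconds of arithmetic, here is that arithmetic as a sketch, with entirely made-up round numbers (the per-tree figure is illustrative, not a measured value):

```python
# Back-of-envelope with made-up numbers: CO2 kept out of the atmosphere by
# (a) leaving a mature tree standing vs (b) harvesting it into long-lived
# timber and replanting.
CO2_PER_MATURE_TREE = 1.0   # tonnes CO2 locked in one mature tree (illustrative)

# (a) Leave it alone: the mature tree holds its ~1 t and adds little more;
#     when it eventually dies and rots, most of that carbon returns to the air.
leave_alone = CO2_PER_MATURE_TREE

# (b) Harvest into a building (carbon stays locked up for the building's life)
#     and replant; the new tree pulls down roughly another tree's worth.
harvest_and_replant = CO2_PER_MATURE_TREE + CO2_PER_MATURE_TREE

print(f"leave standing:    ~{leave_alone:.1f} t CO2 sequestered")
print(f"harvest + replant: ~{harvest_and_replant:.1f} t CO2 sequestered")
# This only holds if the timber outlives the tree's natural decay and the
# replanting actually happens -- which is exactly the argument above.
```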
>anyone else think 60-70 years as a lifetime for a building isn't exactly ambitious?
"Not exactly ambitious" is pretty much how the whole world is currently operating. "Long term" is, what, five years? It's like everyone believes to be on the initial incline towards the singularity or the apocalypse or both, and there's no point planning. I don't think this zeitgeist is sustainable in the (real) long term.
Spot on. The article's author says "And true sustainable parity will depend on access to the global markets with competitive cutting-edge products. There's no sign of that." and showing he's missing this point entirely.
India + Russia + China + chunks of SE Asia, Africa and South America is a global market. And saying there's no sign of "competitive cutting-edge products" is myopic. There are all the signs, just not for next month, not next fiscal term, maybe not for a few years - but, guess what, there's life beyond the next few years, and that life looks a lot like Chinese silicon reaching the cutting-edge.
As for why US politicians went down this route - I'd say ignorance (history shows pretty clearly how this sort of thing works), short-termism (look tough now, it'll be someone else's problem later), but there's a lot of racism in there. I read "China can never reach parity with Intel/AMD" and I don't know how else to interpret it. Of course they can, they just needed a strong motivation, and we just gave it to them.
I wonder whether widespread, institutional, systematic lying really is a relatively recent phenomenon, or whether it's just my perception as I'm getting older and my glasses start to become rose-tinted.
I have the feeling that it used to be that politicians, CEOs and other movers and shakers told half-truths all the time, but outright direct lies were much less common, and tended to carry at least some consequences when found out.
Nowadays, it appears that if a big name says that the sky is green, people aligned with him will happily declare the verdancy of the heavens in a chorus, and most of the unaligned will largely shrug and not hold it against him too much.
I hope it's just that I'm getting older, and imagining better days, as older people do. Because if I'm right, then I have to wonder how long a society can function with rampant denial of reality.
>These brains don't need to be giant, a human brain has the neural capacity to outperform any supercomputer, it just can't be supplied with the required level of oxygen & nutrients to operate at that level for more than a second or two.
I don't think that's true. Brain energy consumption can be measured, e.g. with an MRI; I think that if that theory were good, it would've been tested by now. I'll happily look at any sources confirming this, though.
Also, I have been in that kind of event once; it's a weird feeling, hard to describe, but it felt more like extremely intense focus than an acceleration. Like all background thoughts were killed. After the critical seconds, they came back with a vengeance. It didn't feel like more; if anything, it felt like less.
Code is a resource, but creating code is a service. A very scarce service. Getting paid for that is not evil. Attempting to use a service against the will of the service provider usually is.
Also, code is a resource to which the creator has full moral rights. That means that I, and I alone, get to decide whether adding artificial scarcity to code I created is evil or not. You do not get to determine that, and I do not have to justify my decision.
Philosophy, I know, but since you brought the "evil" word into the conversation, you get the philosophy. It's not always intuitive, but sometimes truth is complex and we just have to deal with that.
It's... somewhat better. I wouldn't say it's "much" better.
The difference is that if you dig up a fossil and burn it, you are adding carbon to the carbon cycle that was not there before. Not on a time scale that makes any difference to anything. It won't go back into the ground.
If you burn wood, you are adding carbon to the atmosphere, but it's all carbon that was in the cycle already anyway. The plant would eventually have died and decomposed, on a time scale that varies a lot but is generally in the order of decades.
It's still bad, but it's a different kind of bad. By burning fossils, you are inflicting permanent damage; by burning wood you are inflicting temporary damage. It's a pretty long "temporary" and it has exceptions, but still.
That's only from the CO2 point of view, though.
There's the we-need-land-to-grow-food point of view, which, as you rightfully note, is a whole lot more urgent and IMHO should be sufficient to relegate biofuels to the dustbin of bad ideas.
Law enforcement is usually best employed as a last-tier solution to any given problem. It's good at dealing with point events, but it's extremely expensive.
Because of that, it's smart to deploy other measures to reduce issues in a statistical sense. Use incentives and disincentives to reduce motives. That way, the number of events that have to be dealt with by law enforcement can be minimized.
That is efficient. For some problems, it's the only way to realistically keep them under control. That is typical of crimes where victims and perpetrators are in collusion, such as contraband, drugs and, yes, ransomware.
Expecting to solve all problems via law enforcement alone is like allowing everyone to just dump trash in the street, and then having the street cleaning staff deal with it. Sure, it works in theory, if you think it's sane to spend the entire town budget on the cleaning staff.
You're suggesting a technical solution to a social problem. It won't work.
More details: your proposal could be implemented in a technical sense, but it can't work if the web server doesn't comply. And the web server is under the offenders' control.
How are you going to force the offender web servers to comply? There are no technical means to do so.
Are you going to suggest using legislation? And enforcement? Yes? How should that work, and how are you going to get it through your nearest parliament? Okay, now you're thinking about the real problem. Get back to me when you've got a solution.
The technical reasons for cybersecurity being a mess are very nearly irrelevant when compared to the economic reasons.
You can bet that if cybersecurity incidents routinely resulted in lawsuits and/or criminal charges against the vendor, buffer overruns would be made extinct PDQ some way or another.
That is exactly how you end up, after a few years of doing this, with unmaintainable messes that are easier to throw away and redo from scratch than to fix. That's only efficient if all you care about is next release and next fiscal term. If the objective is to make a product that will keep making lots of money for decades while being cheap to maintain, it's an extremely inefficient approach.
Good points. Unfortunately, there's a good number of people who, when encountering the word "nuclear", just stop reading and start raging. There's also a good number of people who exhibit a similar behavior on reading the word "Microsoft". Having both words in the same article is pretty much a guarantee that the comments section will be useless.
When I'm feeling optimistic, I hope that big tech first builds a crapload of carbon-neutral power generation for AI, and then the AI bubble bursts for good, leaving said generation available for useful stuff.
I'd say it's because of the aura Trump built for himself. On one hand, he generally projects strong negative messages: he tends to rage at things he dislikes much more than cheer at things he likes. When he does cheer at something, it's usually just a flip-side, cheering because something he dislikes is getting raged at. At the same time, he's extremely self-centered; it's all about him.
At this point, it's difficult to think about Trump without feeling outraged. If you're for him, you're outraged at the things he points at; if you're against him, you're outraged at him. And he's right in the middle of this tornado of negative discourse.
It's not that surprising that people who are slightly unhinged from reality might easily get swept up by that vortex and drawn towards the center. I don't think Trump tends to attract as much attention from the cool, calm and collected of either side of the political spectrum, as he does from the rabid hordes. And that is definitely by design; however, it's clearly not a plan without risks.