Re: Hmm
Trump and the clowns who enable him wouldn't know freedom of speech if it jumped up and bit their dicks off.
I think you've misunderstood the thrust of my argument. C++ needs to be defended against technically unsound attacks because of the risk that these will lead to code migrations or replacements that are likely to introduce new bugs, and because reams of code are written in it. This is a risk that has been understated.
[quote]
If you have to "defend" a language that means it's probably on the way out already. If you feel a language is subject to "attacks" you haven't understood how computer languages evolve in the field of software development.
[/quote]
That's assuming the attacks make technical sense. As others have pointed out, the urging to move to memory-safe languages offers false promises: Rust can be made to operate unsafely, and the risk of migrating legacy code seems to have been understated. I don't agree that this is an existential threat (C++ is not going away), but it is an annoying reputational one.
I've read this MIT Technology Review article [https://www.technologyreview.com/2025/01/24/1110526/china-deepseek-top-ai-despite-sanctions/] on DeepSeek, and it seems to say that sanctions and limitations directly led to innovations that increased computational efficiency in training models. This suggests that AI in the west is suffering from serious bloat. Moore's law is one thing, but as computer hardware has become exponentially more powerful, operating systems and application software have grown massively bloated, because the need to program efficiently has fallen away. There was a time when the application designer had to fight for and justify every byte in memory. Imagine what amazing things could be achieved if that sort of engineering limitation came back!
The recommendations that the tool gives you are simply that: recommendations tailored to your specific case. You can decide to implement a different policy, as long as it's reasonable. The following year, if that implementation has not proven effective (i.e. you failed to quickly find and remove some proscribed content), then it should probably be amended. That incident does not mean you've broken the law, but ideally it would mean that you've already changed the processes in place.
OfCom would have to accept your risk assessment and your description of the processes in place, as well as the review that you carry out every year. If they do, you're fine. If they don't, you need to do it again. If you don't submit the risk assessment and meet the other requirements, then you are non-compliant.
As I said, compliance is about ensuring that the sort of content which is for the most part already illegal is recognised, and that there are suitable processes for dealing with it. If you run such a service and you don't have a risk-appropriate process, or you fail to comply with an enforcement notice, you are liable. Proscribed content making its way onto your service does not in itself constitute a failure to comply. It is to be expected. How you deal with it is what matters. The penalty everyone mentions is the maximum available and would surely be reserved for gross and wilful noncompliance. How about we all just relax a little bit? It hasn't even started yet and some of you are predicting the end of days.
We're tech professionals, mostly. Smart people, I would have thought. We should take an evidence-based approach when talking about this Act and how it is supposed to work. The Online Safety Act has its problems, but nothing like on the scale or scope of what some of you are suggesting. There are some publications explaining what will happen and how it will be enforced. Please read them. Here's one from gov.uk. I've linked to the heading "How the Act will be enforced."
https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer#how-the-act-will-be-enforced
It states that criminal charges apply only where information requests are ignored and where enforcement notices are not complied with.
OfCom provide a tool to determine whether a service is caught by the Act. The start page for that is here, and it links to some other resources:
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/check/
You answer a few questions, and if you find that the Act applies (for most of what we are talking about, it does), you can click a button "Check how to comply", which brings up this:
https://www.ofcom.org.uk/Online-Safety/check-how-to-comply/
Four steps:
"Step 1 - understand the harms
Step 1 will help you to know which kinds of illegal content to assess, and to make accurate judgments about your risks.
Step 2 - assess the risk of harm
Step 2 will help you use evidence to assess and assign a risk level to: the risk of harm to users encountering each of the 17 kinds of priority illegal content and other illegal content; and also the risk of harm of your user-to-user service being used for the commission or facilitation of a priority offence.
Step 3 – decide measures, implement and record
Step 3 will help you identify any relevant measures to implement to address risk, record any measures you have taken, and make a record of your assessment.
Step 4 - report, review and update
Step 4 will help you understand how to keep your risk assessment up to date, and put in place appropriate steps to review your assessment.
Based on your answers to questions asked within the tool, we will provide you with compliance recommendations for your service. It will be of most use to small and medium sized businesses but could be useful to any in scope service provider."
The tool mentioned above is still being developed. It will provide recommendations. Following those recommendations would be a good way of ensuring compliance, but the recommendations are not law. I can imagine a recommendation that communities develop and publish a sensible set of rules and that those rules are enforced.
For the vast majority of online forums to which the Act applies, once you understand what content to look out for, step 2 will result in an assessment of minimal risk, because they already operate in a manner that catches this type of content and deals with it quickly, or because there has never been such an incident in years of operation. That cycling forum mentioned in the article would be one such. In the latter case, step 3 will involve working out how to ensure that any such incidents are caught and dealt with. For example: "We will moderate our channels by providing a means to flag inappropriate content, as well as proactively tackling inappropriate content that moderators discover themselves."

If OfCom accepts the risk assessment and the measures, then you're good. There might still be some incidents which slip through the cracks. This is not a violation; it just means that the risk assessment probably underestimated the risk. If OfCom order removal and you don't comply, that would be a violation. There would have to be a process of appeal, because the regulator is acting in a quasi-judicial role. Step 4 is not unlike what we already have to do every year for GDPR.
I think there are some legitimate concerns around things like hate speech, and there had better be clear advice on that. People often report things as hate speech which do not qualify. It could come down to an enforcement notice, and that would need to be complied with. Fines and, at the most extreme, criminal prosecution are reserved for responsible parties who do not carry out the duties imposed by the Act. If you do not perform and submit a risk assessment, you could be fined, but it won't be £18 million or anything like that. Complying with the Act means carrying out steps 1 through 4, responding to information requests, and complying with enforcement notices.
Some of what the Act is attempting to do may seem pointless (age verification, etc), and perhaps it is. But I really don't think this is armageddon for service providers. If you disagree with any of what I've laid out, then I welcome your thoughtful and polite reply. /ducks
I'm sorry, but this is rubbish. We'll see. Give it a year. I'm sure you will be proven wrong. The Act will catch actors who are not being conscientious. It attempts to ensure that service providers understand what sort of content is proscribed, most of which is common sense, and that they assess the risk of such content a) getting onto their platforms and b) not being swiftly removed and dealt with in a compliant manner. In doing so, they will have to look at whether they have adequate procedures in place. The people who are worried about it are already doing the things they are meant to be doing.
On the other side of the Atlantic, major social media providers are shrugging off their responsibilities and just saying it's the wild west. Maybe Facebook and X.com will stop doing business in Britain. I honestly could not give a shit.
There is so much FUD around this. It's not a minimum fine. It's up to £18 million or 10 percent of qualifying revenue. Read this explainer: https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer#how-the-act-will-be-enforced
I just can't see good faith errors being punished. Wilful failure to engage with OfCom is of course going to incur penalties.
I will begin by admitting that I have not looked very closely at the requirements that come into effect in March. But my limited reading of gov.uk's explanation of the Act and how it is to be enforced suggests to me that a small forum that is actively moderated does not have much to worry about. Inappropriate content will be quickly removed and appropriate actions taken. Adequate controls are therefore already in place, and it is just a matter of noting this. It might seem a bit arduous in the first instance, but that would be a one-off cost, as the conditions would remain stable year on year until and unless the forum becomes much larger. GDPR creates a similar nuisance for small data controllers, but with the initial assessment exercise out of the way, it has become mainly a yearly box-ticking exercise.
The article points to a four-year-old post in a small forum as an example of how this Act adversely affects small online communities. They're still freaking out about it and are talking about moving to Discord, but one wonders whether this is an overreaction. The fact is that the law exists, and forum providers need to comply with it. The degree to which one must comply is proportionate to the risk of harm. In theory, a risk assessment would show a negligible risk of harm on an efficiently moderated forum which has been operating for many years and has never seen an example of such harm, or which has demonstrated swift removal of potentially harmful content. In practice, I hope this risk assessment does not create an undue burden. I can practically guarantee that OfCom are not going to over-prosecute, because they do not have the resources.
We use Atlassian because, having tried numerous CMSes over the years, Confluence is the only one that has clicked with authors, who just "get it" and actually like writing documentation there. Jira has also revolutionised our service management, in that people actually communicate with each other. Fancy that. It's all down to the alerting, which just works. Jira's workflow engine is very flexible, so we've taken to using Jira Software projects for data capture and other things. I'm writing and publishing custom Forge macros for Confluence, and it's actually pretty cool. All in all, despite some gripes, we're happy with it.
Piss off with your thumbs down. I'll bet money an AI will be able to translate most C code into the language of your choice within the next 3-5 years. It wouldn't be good enough for production, but it would save an enormous amount of time.
...for aptly demonstrating how dependent our IT infrastructures are on trusted vendors, and how vulnerable they are to defects in the wild. When that trust is misplaced, as was the case here, really bad results can occur. It's something like "who watches the watchmen". The QA processes for a trusted security vendor need to be far more robust than this episode suggests. I suppose it could have been a lot worse.
No, THIS is nonsense. It will never ever fly. It goes against natural justice to turn victims of extortion into criminals for acting out of fear. I do not want to live in a society that deems this acceptable. Acquiescing to threats by simply handing over money can never be a crime.
Plus, we're actually talking about property, whether it's the property of the company or the property of the company's customers. The fact that it's digital doesn't make any difference. As a legal person, the company has rights with regard to property. It will also be legally obligated to safeguard its customers' personal data as far as practical. You have not thought it through.
You cannot force victims to rely on law enforcement. The idea that it's illegal to pay a ransom "unless authorised as part of a credible police sting" (as I've seen suggested) is laughable.
"Paying a ransom (unless authorised as part of a credible police sting) needs to be a criminal offence"
I think that's taking it too far. You might as well say that in relation to any extortion attempt. By that logic, paying the mafia to "take out the garbage" in your office building should be a criminal offence too. Yeah, well, failure to pay will get your legs broken. This is the nature of duress.
Hang on. Let me Google that for you. It's licensed under its own open source license, the PSF (Python Software Foundation) License Agreement, the terms of which you are free to read. You can always submit patches to the maintainers if there is source code available; they are under no obligation to merge them. There is of course source code available, otherwise the project could not really even pretend to be open source.
Not sure what point you were trying to make.
Downvote me all you like, but that quote is incredibly insulting to the millions of people all over the world who are involved in Python one way or another. Not just Python, but any language effort that is not statically typed. It manages to be both woefully ignorant and elitist at the same time. This is like a debate at a student union. Plenty of old-time programmers like myself have found our way to Python, and we use it responsibly. The problem that this article mentions turns out to have nothing at all to do with Python; it was a locale issue. The Python 2 program still actually works, even though the language has been out of support for four years. That's saying something.
Statically typed languages are great. I like C#, and .NET runs well on Linux now. We mostly deploy apps to GKE here. We have some containerised stuff that was migrated from Windows Server. It's fantastic that we could do that. And TypeScript is a good improvement over JavaScript. But mostly it's Python for the backend, and there is really no problem at all with that. Pytest makes it fun to write unit tests; they get run by the CI pipeline, and ruff checks the formatting. JupyterLab is wonderful for prototyping. Application development has been steadily moving in the direction of cloud native for the past decade or so. Desktop applications are on the way out. And Python is among the tools of our trade, so just deal with it.
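To give a flavour of why pytest wins people over, here's a minimal sketch; the function under test and the file name are made up for illustration:
[code]
# test_pricing.py -- minimal pytest sketch (names invented for illustration)
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, rejecting out-of-range inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(100.0, 25) == 75.0

@pytest.mark.parametrize("bad", [-1, 101])
def test_apply_discount_rejects_out_of_range(bad):
    with pytest.raises(ValueError):
        apply_discount(100.0, bad)
[/code]
Plain assert statements, readable failure output, and parametrised cases for free. Run it with "pytest" locally, or let the CI pipeline do it.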
"The reason Python is "popular" is because it's the first language you learn on contemporary CS courses. A few years ago it was Java for the same reason. JavaScript is also very popular, but it just seems like Stockholm Syndrome for programmers where they've just spent too long unaware of the alternatives and their relative strengths or weaknesses."
No, I think you've got that backwards. You learn about it because it's popular and good, and because you're likely to be using it professionally. I didn't learn about it in school; I'm 30 years out of education, and Python has become my favourite language for most work. One of the reasons it has taken off is that it proved very well suited to data science, and as a result the ecosystem there has blossomed. What you say about scientists is true. I work with a lot of data scientists, and they are not the best coders. But that's nothing against the language. You think this is bad? Try PHP.
On testing, you're off the mark there. Pytest is the best testing framework I've ever seen. No contest. And if you take a test-driven approach, then you're writing tests at the same time as, or before, you write your code. I'm a senior software engineer with over 25 years' experience across all sorts of languages and platforms. I love Python and FastAPI for anything on the server, and I love React with modern JavaScript for front-end work. I don't know what you're talking about with testing the language rather than the logic. That has not been my experience, but again, if you approach unit testing and integration testing properly, then that's not an issue. Every language has features that you're probably better off not exploiting. It comes down to experience.
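To illustrate testing the logic rather than the language, here's a hedged sketch of a FastAPI endpoint with its test; the endpoint and the data are invented for the example:
[code]
# app.py -- minimal FastAPI sketch (endpoint and data invented for the example)
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient

app = FastAPI()

INVENTORY = {"widget": 3, "gadget": 0}

@app.get("/stock/{item}")
def get_stock(item: str) -> dict:
    """Return the stock count for a known item, or 404 for an unknown one."""
    if item not in INVENTORY:
        raise HTTPException(status_code=404, detail="unknown item")
    return {"item": item, "count": INVENTORY[item]}

# The tests exercise the business rules, not the language.
client = TestClient(app)

def test_known_item_returns_count():
    assert client.get("/stock/widget").json() == {"item": "widget", "count": 3}

def test_unknown_item_is_404():
    assert client.get("/stock/nothing").status_code == 404
[/code]
The tests state business rules in plain terms. Nothing here is about exercising language features for their own sake.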
As for OOP, it does have its issues and it's helpful to recognise them. Functional programming has a lot of inherent advantages. Interestingly, Python alleviates one of the problems with OOP by supporting multiple inheritance.
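For instance, mixins let you compose behaviour without a deep single-inheritance hierarchy. A toy sketch, with class names invented for illustration:
[code]
# Toy mixin sketch (class names invented for illustration)
import json

class JsonMixin:
    """Adds JSON serialisation to any class with a plain __dict__."""
    def to_json(self) -> str:
        return json.dumps(self.__dict__)

class ComparableMixin:
    """Derives ordering from a sort_key() the host class provides."""
    def __lt__(self, other):
        return self.sort_key() < other.sort_key()

class Employee(JsonMixin, ComparableMixin):
    """Inherits from both mixins; no deep hierarchy required."""
    def __init__(self, name: str, grade: int):
        self.name = name
        self.grade = grade

    def sort_key(self) -> int:
        return self.grade

staff = sorted([Employee("Bob", 3), Employee("Alice", 1)])
print([e.to_json() for e in staff])  # Alice first, then Bob
[/code]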
I can see that it wasn't clear, because there is an AC replying to another AC, but I was directing my comments at the original AC. I may have replied to the wrong one, but it should have been clear anyway, or so I would have thought. It's only months later (approaching years) that I've seen this, so apologies.
He has certainly made some bad business decisions of late, but I'm sure the business arrangements are in order. There has never been a recorded instance of a court piercing the corporate veil of a publicly traded company, but if it's going to happen anywhere, I suppose it will probably happen in California.
I just don't see it. You don't get to be the world's richest man (off and on) by being stupid with your business arrangements. Piercing the corporate veil is very difficult with publicly traded companies. In any case, there needs to be a demonstration of egregious action by a shareholder or shareholders, usually actual fraud. He'll just take the company into bankruptcy, escaping personal liability, and the former employees to whom he owes three months' severance will have to join the queue. It wouldn't surprise me at all if this whole past year was a deliberate plot to ruin Twitter over some personal vendetta, but that would be really hard to prove.
You're wrong about the valuation, but they'll never get at his personal assets. That simply doesn't happen. I can't even think of the mechanism you're alluding to by which the owner(s) or directors of a corporation could be made personally financially liable for its failure, outside of malfeasance. They can be held criminally liable.