> the project lead, Rudra B. Rudra, has had less time to dedicate to the work recently. As a result, the team could not ship a stable 25.10 release this October.
That may not be the biggest deal, because Ubuntu 25.10 itself is not going to be stable anyway: it switched from GNU coreutils to the uutils Rust rewrite, making 25.10 a "see what breaks and fix it" canary before the long-term support 26.04 release in six months.
what's super weird to me is how people seem to look at LLM output and see:
"oh look it can think! but then it fails sometimes! how strange, we need to fix the bug that makes the thinking no workie"
instead of:
"oh, this is really weird. Its like a crazy advanced pattern recognition and completion engine that works better than I ever imagined such a thing could. But, it also clearly isn't _thinking_, so it seems like we are perhaps exactly as far from thinking machines as we were before LLMs"
>It would be possible to be infinitely arbitrary to the point of “AGI” never being reachable by some yard sticks while still performing most viable labor.
"Most viable labor" involves getting things from one place to another, and that's not even the hard part of it.
In any case, any sane definition of general AI would entail things that people can generally do.
DI (generally) tends to point more towards constructing objects or systems. This would be a bit closer to a functional equivalent of the OO "template method" pattern: https://en.wikipedia.org/wiki/Template_method_pattern
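For reference, a minimal Python sketch of the template method pattern being compared here; the class and method names are invented for illustration, not taken from any real codebase:

```python
# A minimal sketch of the template method pattern: the base class fixes
# the overall construction steps, subclasses fill in the variable parts.
from abc import ABC, abstractmethod

class ReportBuilder(ABC):
    def build(self) -> str:
        # The "template method": the skeleton of the algorithm lives here.
        return f"{self.header()}\n{self.body()}\n{self.footer()}"

    def header(self) -> str:
        return "=== report ==="

    @abstractmethod
    def body(self) -> str:
        ...  # the step subclasses must supply

    def footer(self) -> str:
        return "=== end ==="

class SalesReport(ReportBuilder):
    def body(self) -> str:
        return "sales figures go here"

print(SalesReport().build())
```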
I don't think it's incompetence, I think it's planned, merciless strategy. It costs nearly nothing to fire a ton of people, so why not hire a bunch with free money and dump them as soon as the free money disappears?
it could be really useful for cases where you're repeatedly processing similar JSON structures, like analytics events. Are there any plans for language bindings beyond the current implementation?
For problems with a very large number of solutions this quickly becomes inefficient. The blocking clauses will bog down the solver hard and waste tons of memory.
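For concreteness, here is what the blocking-clause enumeration loop typically looks like; a minimal sketch assuming the python-sat (PySAT) package, with a made-up toy formula:

```python
# Model enumeration via blocking clauses (sketch, assumes `python-sat`).
from pysat.solvers import Glucose3

# Toy formula for illustration: (x1 or x2) and (not x1 or x3)
clauses = [[1, 2], [-1, 3]]

with Glucose3(bootstrap_with=clauses) as solver:
    while solver.solve():
        model = solver.get_model()
        print(model)
        # Block this exact model so the next solve() finds a new one.
        # With many solutions this clause list grows without bound,
        # which is the inefficiency described above.
        solver.add_clause([-lit for lit in model])
```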
A more clever approach is to emulate depth-first search using a stack of assumption literals. The solver still retains learned conflict clauses so it's more efficient than naive DPLL.
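And a sketch of that assumption-based DFS, under the same assumptions (PySAT, same toy formula); the fixed variable ordering and the stack layout here are illustrative choices:

```python
# DFS over assignments via solver assumptions (sketch, assumes `python-sat`).
from pysat.solvers import Glucose3

clauses = [[1, 2], [-1, 3]]
variables = [1, 2, 3]

with Glucose3(bootstrap_with=clauses) as solver:
    # Each stack entry is a partial assignment, expressed as assumption
    # literals for variables[0 .. depth-1].
    stack = [[]]
    while stack:
        assumptions = stack.pop()
        # One incremental solve per node; learned conflict clauses persist
        # inside the solver, so unsatisfiable subtrees are pruned cheaply.
        if not solver.solve(assumptions=assumptions):
            continue
        if len(assumptions) == len(variables):
            print(assumptions)  # a complete model
            continue
        var = variables[len(assumptions)]
        stack.append(assumptions + [-var])  # explore var = False later
        stack.append(assumptions + [var])   # explore var = True first
```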
They actually have a definition: "when AI generates $100 billion in profits" it will be considered AGI. This was defined in their previous partnership agreement; not sure if it still holds after the restructuring.
Death caused by an automated vehicle is not the controversial aspect; the difficulty comes when the vehicle must choose between possible outcomes:
* avoid crashing into pedestrian(s) but kill occupant(s)
* crash into pedestrian(s) to save occupant(s)
A real-life trolley problem at work, programmed by someone, somewhere.
I've been talking about my health problems to unaccountable bullshit machines my whole life and nobody ever seemed to think it was a problem. I talked to about a dozen useless bullshit machines before I found one that could diagnose me with narcolepsy. Years later out of curiosity I asked ChatGPT and it nailed the diagnosis.
Of course not, then we'd never hear the end of it :)
I was just pointing out that the company has always had AGI as a goal, even when they were doing the small Gym prototypes and all of that stuff that made the (tech) news before GPT was a thing.
I prefer Atos plus a solid open-source solution over an in-house IT department that ditches the battle-tested open-source solution because of XYZ, only for bugs to rain from the sky six months later and users' data to end up searchable on Google.
Standing up a whole department requires skills. If you don't have those skills, please hire the "parasite". I prefer that: at least they provide a service. Overpaid, OK, but they have at least some knowledge of the business.
Aren't all frontier models already able to use all these languages? Support for specific languages doesn't need to be built in; LLMs handle all languages because they are trained on multilingual data.
I don't think anybody accepts school shootings, and anybody accusing half of the population of "accepting" this obvious problem is likely making a bad faith argument attempting to paint their political opposition in a bad light.
It's a bit much to blame the user for this when the product is crafted specifically to give the impression of being magical. Not to mention the marketing and media.
Is the company valued at $500 billion or is the sum of the digital assets they’ve collateralised worth $500 billion?
Because if you buy the tokens you presumably do not own the company. And if you buy the company you hopefully don’t own the tokens - nor the assets that back the tokens.