There isn't yet a game that puts all of its players into one huge shared level, without shards, but there might be eventually. Current game engines don't support levels with that many simultaneous players. There's an interview with Neal Stephenson and Tim Sweeney about the Metaverse in which Sweeney says supporting massively multiplayer simulation at that scale is what he plans for Unreal Engine 6: https://www.matthewball.co/all/sweeneystephenson
> So one of the big efforts that we're making for Unreal Engine 6 is improving the networking model, where we both have servers supporting lots of players, but also the ability to seamlessly move players between servers and to enable all the servers in a data center or in multiple data centers, to talk to each other and coordinate a simulation of the scale of millions or in the future, perhaps even a billion concurrent players. That's got to be one of the goals of the technology. Otherwise, many genres of games just can never exist because the technology isn't there to support them. And further, we've seen massively multiplayer online games that have built parts of this kind of server technology. They've done it by imposing enormous costs on every programmer who writes code for the system. As a programmer you would write your code twice, one version for doing the thing locally when the player's on your server and another for negotiating across the network when the player's on another server. Every interaction in the game devolves into this complicated networking protocol every programmer has to make work. And when they have any bugs, you see item duplication bugs and cheating and all kinds of exploits. Our aim is to build a networking model that retains the really simple Verse programming model that we have in Fortnite today using technology that was made practical in the early 2000's by Simon Marlow, Simon Peyton Jones and others called Software Transactional Memory.
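For anyone unfamiliar with it: the STM Sweeney mentions is the one in GHC Haskell's stm library. Here's a minimal sketch of how it lets you write shared-state logic once and have the runtime resolve conflicting concurrent updates; the player-to-player gold transfer is a made-up example, not anything from Unreal or Fortnite:

    import Control.Concurrent.STM

    -- Transferring gold between two players. The logic is written once,
    -- with no networking protocol in sight; 'atomically' makes the whole
    -- block an all-or-nothing transaction.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)        -- block and retry until affordable
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      alice <- newTVarIO 100
      bob   <- newTVarIO 0
      atomically (transfer alice bob 30)
      readTVarIO bob >>= print         -- prints 30

The point of the quote is exactly this: transfer looks like plain sequential code, and the transaction guarantees the two balance updates commit together or not at all, which is the property that rules out item-duplication-style bugs.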
"Gulf of North America" would make more sense, because most people regard (just) "America" as synonymous with "USA". Though that name would perhaps be more appropriate for the Hudson Bay, which is farther north.
That's not true. I live in Europe (Germany), and everyone I know personally, myself included, says "America" when they mean "the USA" and means "the USA" when they say "America".
I visited Germany a few years ago and I can confirm that they called me “The American” and referred to the US as America. My colleague's explanation was “There’s Romanians, from Romania. There’s Hungarians, from Hungary. There’s Americans, from America.” I laughed a bit when he said it.
That's also why in The Matrix (1999) the main character takes the red pill (facing grim reality) rather than the blue pill (forgetting about grim reality and going back to a happy illusion).
Aye, I always thought the character of Cypher was tragic as well: his reality sucked so much that he'd consciously choose to go back to living a lie, and then forget he'd ever made that choice.
Yeah. It's "pick your poison". If your English sounds broken, people will think poorly of your text. And if it sounds like LLM-speak, they won't like it either. Not much you can do, at least within a limited time frame.
Lately I have more appreciation for broken English and short, to-the-point sentences than for the 20-paragraph AI bullet-point lists with 'proper' formatting.
Maybe someone will build an AI model that's succinct and to the point someday. Then I might appreciate using it a little more.
This. AI translation is so accessible now that if you're going to submit machine translations, you may as well just write in your native language and let the reader machine-translate. That at least accurately represents the amount of effort you put in.
I will also take a janky script for a game hand-translated by an ESL indie dev over the ChatGPT House Style 99 times out of 100 if the result is even mostly comprehensible.
It's extraordinarily hit or miss. I've tried giving instructions to be concise, to only give high level answers, to not include breakdowns or examples or step-by-step instructions unless explicitly requested, and yet "What are my options for running a function whenever a variable changes in C#?" invariably results in a bloated list with examples and step-by-step instructions.
The only thing that changed in all of my experimentation with various saved instructions was that sometimes it prepended its bloated examples with "here's a short, concise example:".
LLMs are pretty good at fixing documents in exactly the way you want. At the very least, you can ask one to fix typos and grammar errors without changing the tone, structure, or content.
> The first explanation is that text tokens are discrete while image tokens are continuous. Each model has a finite number of text tokens - say, around 50,000. Each of those tokens corresponds to an embedding of, say, 1000 floating-point numbers. Text tokens thus only occupy a scattering of single points in the space of all possible embeddings. By contrast, the embedding of an image token can be any sequence of those 1000 numbers. So an image token can be far more expressive than a series of text tokens.
Does anyone understand the difference he's pointing at?
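My reading, as a toy sketch with 4-dimensional embeddings and made-up numbers (not any real model): a text token's embedding can only be looked up from a finite table, so it's one of a fixed set of points, while an image token's embedding is computed from pixel values and can land anywhere in the continuous space.

    import qualified Data.Map as Map

    type Embedding = [Double]          -- 4 dims here instead of ~1000

    -- Text: an index into a finite table. The model can only ever
    -- emit one of these fixed points in embedding space.
    vocab :: Map.Map String Embedding
    vocab = Map.fromList
      [ ("cat", [0.1, 0.9, 0.0, 0.2])
      , ("dog", [0.2, 0.8, 0.1, 0.1])
      ]

    embedText :: String -> Maybe Embedding
    embedText tok = Map.lookup tok vocab

    -- Image: a (learned) linear map applied to raw pixel values, so
    -- the result can be any point in the continuous embedding space.
    projection :: [[Double]]           -- made-up 4x3 weights
    projection =
      [ [0.5, -0.1,  0.3]
      , [0.0,  0.7,  0.2]
      , [0.9,  0.1, -0.4]
      , [0.2,  0.2,  0.2]
      ]

    embedImagePatch :: [Double] -> Embedding
    embedImagePatch pixels =
      [ sum (zipWith (*) row pixels) | row <- projection ]

    main :: IO ()
    main = do
      print (embedText "cat")                 -- one of ~50,000 fixed points
      print (embedImagePatch [0.3, 0.6, 0.9]) -- an arbitrary new point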
In fact, most digital goods that are sold in large numbers via download are, as far as I'm aware, sold with some form of DRM, like films and video games. Otherwise piracy would just be too easy. MP3s don't have DRM and are still sold (e.g. by Amazon), but they now seem to have been largely replaced by music subscription services.
And this might be a reaction to the fact that music piracy is quite easy; if it weren't, perhaps there would be no Spotify, where you get basically All The Music in existence for peanuts. (Note that no equivalent subscription service exists for movies or games: Netflix and Xbox Game Pass include only a limited selection of content in their subscriptions.)
> Not to mention the functions are also translated to the other language.
This makes a lot of sense when you recognize that Excel formulas, unlike proper programming languages, aren't necessarily written by people with a sufficient grasp of English, especially when it comes to the more abstract mathematical concepts, which people learn not in secondary-school English classes but in their native-language mathematics classes.
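For a concrete illustration, here are a few real English/German Excel function-name pairs; the lookup table is just for demonstration, not how Excel implements its localization:

    import qualified Data.Map as Map

    -- Real English/German Excel function-name pairs.
    enToDe :: Map.Map String String
    enToDe = Map.fromList
      [ ("SUM",     "SUMME")
      , ("IF",      "WENN")
      , ("VLOOKUP", "SVERWEIS")
      , ("AVERAGE", "MITTELWERT")
      ]

    main :: IO ()
    main = mapM_ print (Map.toList enToDe)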
Valid question, as they already have a partnership with OpenAI to use ChatGPT in Siri. I personally use GPT for illustrations and Nano Banana for photo edits (Midjourney for realistic photos).
As an aside, perhaps they're using GPT/Codex for coding. Did anyone else notice the use of emojis and → in their code?
Someone who works in AI told me they think that was trained in as a "watermark"; apparently the same is true of the em-dashes, to "ease people into AI" or something.