
New Technologies in Modern Games


Epic Games has presented Unreal Engine 5, and the demo left few people indifferent. Billions of polygons in a frame, rendering at roughly one polygon per pixel, a dynamic global illumination system – there are so many graphical (and some audio) innovations that we already want to dive into games this detailed.

All of this is fast becoming our present – the future we imagined ten years ago. Back then, the picture in games like Crysis 2, The Last Guardian, BioShock 2, or Gran Turismo 5 seemed perfect, and the graphics that UE5 now renders in real time could only be seen in pre-rendered CGI clips. But graphics are not the only way games can evolve, so let us briefly talk about what we may see in games another ten years from now.

Among the new technologies in development are:

  • Sound generation;
  • Deformable simulations;
  • Muscle simulation;
  • Smarter allies and opponents;
  • Generated music;
  • Photorealistic graphics;
  • Hands-free control.

Sound Generation

Sound for games is either recorded with microphones (footsteps on different surfaces, or the gunshots in Escape from Tarkov, for example) or created artificially with special tools (the sounds of tanks that no longer exist in reality for World of Tanks). However, there is a third way, described in the paper “Toward Wave-based Sound Synthesis.” The essence of the technology is simple. Imagine an animation or game scene in which your character pours water into a container. Normally you would need to record the actual sound of flowing water for the scene. With wave-based sound synthesis, the sound is synthesized automatically.

The sound will change depending on the height from which the water pours, the speed of the pour, the material the container is made of, and so on. Another example: imagine you recorded a few lines for one of the characters and then decided that, per the script, a bucket should be put over his head. The sound from under the bucket would be different, so you would either have to re-record everything or doctor the recorded speech in an audio editor. With wave-based synthesis, if the character’s head is covered with a bucket, his voice will automatically sound like it is coming from under a bucket.
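
The actual wave-based solver simulates acoustic wave propagation through the scene, which is far too heavy to reproduce here. But the core idea – sound derived from scene state instead of picked from a bank of recordings – can be sketched in a few lines of Python. Everything below (the pitch mapping, the filter constant) is an illustrative assumption, not the paper’s method:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def pouring_tone(fill_level: float, duration: float = 0.5) -> np.ndarray:
    """Toy stand-in for wave-based synthesis: the resonant pitch of a
    filling container rises as the air column above the water shortens."""
    # Hypothetical mapping: empty container ~200 Hz, full ~800 Hz.
    freq = 200.0 + 600.0 * fill_level
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return 0.5 * np.sin(2.0 * np.pi * freq * t)

def muffle(signal: np.ndarray, smoothing: float = 0.1) -> np.ndarray:
    """One-pole low-pass filter standing in for the bucket over the head:
    an occluder mostly removes high frequencies, which is what this does."""
    out = np.empty_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += smoothing * (x - acc)  # exponential smoothing of the waveform
        out[i] = acc
    return out

# As the container fills, the synthesized pitch rises automatically;
# no pre-recorded sample per fill level is needed.
clip = np.concatenate([pouring_tone(level) for level in (0.1, 0.5, 0.9)])
occluded = muffle(clip)  # the same source, now "heard from under a bucket"
```

The shape of the API is the point: the game hands over physical state (fill level, occluders) and gets a signal back, instead of choosing from recordings made in advance.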

However, for all its advantages, this technology has one disadvantage: it consumes a huge amount of resources. It took a 32-core processor 19 hours to generate the sound of a single drop of water. And what happens when there are dozens or hundreds of different sounds in a scene? No PS5 is good enough for that. So if this feature does appear in games, it will not happen very soon.

Deformable simulations

Destructible objects in games are one of those things that make the game world (especially an open world) more believable. Many people probably remember Red Faction, Crysis, and Battlefield: Bad Company – games with a certain degree of destructibility for inanimate objects; Metal Gear Rising: Revengeance extended it to living things. But these are isolated examples. You can’t destroy a building exactly the way you want or split a goblin with an axe at an arbitrary angle. If it’s not in the game, you can’t do it; if it is in the game, you can do it, but only the way the developers intended.

Until recently, real-time deformable simulation was impossible because the calculations took far too long – they were measured in “seconds per frame.” However, with the algorithm from “A Scalable Galerkin Multigrid Method,” deformable simulations can run on an ordinary graphics card, which makes real-time rates (up to 40 frames per second) achievable.

The main advantage of this method is its tunability. For example, the amount of geometry in an object can be reduced, pushing the simulation speed for a single object up to 180 frames per second. So we are waiting for deformable simulations in the games of our future.
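
The paper’s multigrid solver is well beyond a blog snippet, but what it accelerates – nodes connected by elastic constraints, advanced every frame – can be shown with a toy mass-spring chain. This is plain semi-implicit Euler, not the Galerkin multigrid method itself, and all constants are illustrative:

```python
import numpy as np

# Toy deformable body: a chain of nodes joined by springs. The real solver
# is implicit and hierarchical; this only shows what is solved per frame.
N = 10                       # number of nodes
REST = 0.1                   # rest length of each spring (meters)
K = 500.0                    # spring stiffness
DT = 1.0 / 40.0              # matches the ~40 fps figure above
GRAVITY = np.array([0.0, -9.8])

pos = np.stack([np.arange(N) * REST, np.zeros(N)], axis=1)
vel = np.zeros_like(pos)

def step(pos, vel):
    forces = np.tile(GRAVITY, (N, 1))
    for i in range(N - 1):
        d = pos[i + 1] - pos[i]
        length = np.linalg.norm(d)
        f = K * (length - REST) * d / length  # Hooke's law along the spring
        forces[i] += f
        forces[i + 1] -= f
    vel += forces * DT
    vel[0] = 0.0              # pin the first node in place
    pos += vel * DT
    return pos, vel

for _ in range(100):          # the chain sags and jiggles under gravity
    pos, vel = step(pos, vel)
```

Cutting N in half here is exactly the “less geometry, more frames per second” trade-off the method exposes, just at a vastly smaller scale.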

Muscle simulation

What is a character in a game? Just a “skeleton” wrapped in polygons, with hitboxes hanging on it. Yes, many games have realistic body physics (how a body reacts to bullets and explosions), but these are half-measures. The “Volume Invariant Position-based Elastic Rods” technology (VIPER) makes it possible to create muscles. You do not have to be a master of anatomy who knows the difference between biceps and quadriceps – VIPER does the work for you. You attach the muscles to the skeleton, and it moves with their forces taken into account. VIPER can also simulate muscle growth (for those who want to raise their own Hulk). And, thanks to muscles, we get more accurate soft-body physics.

How complex is this simulation? It already runs in real time. In other words, the technology is efficient enough that simulating a muscle takes only a few milliseconds, and even a few dozen muscle objects in a scene take no more than 10 ms.
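
VIPER’s full rod model, with its volume-invariance term, is too involved for a snippet, but it belongs to the position-based dynamics (PBD) family, and that backbone is compact: predict positions, project them onto constraints, recover velocities. Below, a “muscle” is reduced to a single stretch constraint between two particles; the activation shortening is an illustrative assumption:

```python
import numpy as np

DT = 0.005                    # "a few milliseconds" per simulation step
REST_LEN = 1.0                # relaxed muscle length

p = np.array([[0.0, 0.0], [1.2, 0.0]])  # two ends of the "muscle"
v = np.zeros_like(p)
inv_mass = np.array([0.0, 1.0])         # end 0 is fixed to the bone

def pbd_step(p, v, activation=0.5):
    # Activation shortens the target length, pulling the free end in.
    target = REST_LEN * (1.0 - 0.3 * activation)
    p_pred = p + v * DT                  # 1. predict positions
    d = p_pred[1] - p_pred[0]
    length = np.linalg.norm(d)
    corr = (length - target) * d / length
    w = inv_mass / inv_mass.sum()
    p_pred[0] += w[0] * corr             # 2. project onto the constraint
    p_pred[1] -= w[1] * corr
    v = (p_pred - p) / DT                # 3. velocities from position change
    return p_pred, v

for _ in range(10):
    p, v = pbd_step(p, v)                # the free end settles at the target
```

The appeal of this family of methods matches the numbers above: each constraint projection is cheap and local, so dozens of muscles fit in a few milliseconds.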

Allies and opponents

Games are a competition. And if in online games we fight other people (often toxic ones), in single-player games the role of the enemy goes to the computer. Usually, few studios bother with genuinely clever A.I. that is interesting to play against (like the bots in F.E.A.R.); most build it on the principle of “good enough.” The problem is not making A.I. stronger – anyone can crank a bot’s reaction-speed parameters past what any esports player could match. Making an A.I. that behaves like a human is the harder task, as the sketch below illustrates.
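
To make the point concrete, here is a sketch of “humanizing” a bot: instead of reacting instantly with perfect aim, it draws a reaction delay and an aiming error from human-like distributions. All class names and numbers are illustrative assumptions, not any shipping game’s code:

```python
import random

HUMAN_REACTION_MEAN = 0.25    # seconds; a rough average visual reaction time
HUMAN_REACTION_STDDEV = 0.05

class HumanizedBot:
    def __init__(self):
        self.pending = []     # queue of (fire_time, aim_error) tuples

    def on_enemy_spotted(self, now: float):
        # Decide *when* the bot may respond, not just *how well*.
        delay = max(0.1, random.gauss(HUMAN_REACTION_MEAN, HUMAN_REACTION_STDDEV))
        aim_error = random.gauss(0.0, 2.0)   # degrees off target
        self.pending.append((now + delay, aim_error))

    def update(self, now: float):
        due = [shot for shot in self.pending if shot[0] <= now]
        self.pending = [shot for shot in self.pending if shot[0] > now]
        for _, aim_error in due:
            print(f"bot fires, {aim_error:+.1f} degrees off target")

bot = HumanizedBot()
bot.on_enemy_spotted(now=0.0)
for t in (0.1, 0.2, 0.3, 0.4):   # game ticks
    bot.update(now=t)
```

Setting the delay and error to zero gives the unbeatable aimbot; the hard A.I. problem is everything this sketch leaves out – positioning, flanking, retreating, bluffing.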

Many of you have probably heard about AlphaStar. This A.I. system from DeepMind (Google) was successfully tested in StarCraft II against both esports professionals and regular players, and it reached the rank of Grandmaster in no time.

Music

Fortunately, music remains out of reach (for now) for neural networks. Outdoing Jesper Kyd or Mick Gordon at writing a score is a harder problem than beating Grandmasters in SC2. But stubborn neural nets are already taking their first steps into music: OpenAI’s Jukebox can generate a track that never existed before, based on a small snippet of an original track. Yes, the generated piece of “music” sounds rather sloppy. But maybe in 10-15 years a neural network will automatically compose epic tracks during battles and squeeze out a tear with violin and piano during dramatic moments, producing unique, never-before-heard mixes of specific instruments in a particular genre at the press of a couple of buttons.

Photorealistic Image

Few people today dare to release a game with a mediocre picture. Photorealism has long ceased to be a “highlight” for developers and is taken for granted. Some games are almost indistinguishable from real life – the only thing standing between them and a perfect likeness is the so-called “uncanny valley” effect, where computer characters that look almost human begin to unwittingly repel the player. There are many sites on the Internet where users are asked to guess what is in front of them – a screenshot from a video game or a photograph of a real place.

Hands-free control

Some developers think it’s boring to control a game the old-fashioned way. Microsoft released the controversial but interesting Kinect; Sony offered the PlayStation Move motion-capture system. The American company NeuroSky went the furthest and released a neural headset. The device captures brain waves and converts them into commands in a video game.

The device is not perfect yet – for a successful game you need to clear your mind and concentrate properly, which is not always possible. Gesture-capture systems, in turn, are not accurate enough and support only a handful of games. But a start has been made.
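
Consumer EEG headsets generally expose a smoothed “attention” score rather than raw brain waves, and the game-side problem is turning that noisy number into a crisp command without flickering. A common trick is hysteresis – two thresholds instead of one. The sketch below assumes a hypothetical read_attention() call in place of a real headset SDK:

```python
import random

ENGAGE_THRESHOLD = 70    # start the action when focus rises above this
RELEASE_THRESHOLD = 50   # stop only once focus drops below this

def read_attention() -> int:
    """Stand-in for the headset SDK: returns a 0-100 focus level."""
    return random.randint(0, 100)

active = False
for tick in range(20):
    attention = read_attention()
    if not active and attention >= ENGAGE_THRESHOLD:
        active = True
        print(f"tick {tick}: focus {attention} -> command ON")
    elif active and attention <= RELEASE_THRESHOLD:
        active = False
        print(f"tick {tick}: focus {attention} -> command OFF")
```

The gap between the two thresholds is what keeps a wavering focus level from toggling the command every frame – the same debouncing idea used for any noisy input.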
