Machine/Deep Learning and the Direction of the Gaming Industry

omzdog

Grandmaster Knight
( ͡° ͜ʖ ͡°)
Let's speculate and wonder.
Basically I wish to draw your attention, for a short while, to imagining a time in which video games are no longer 'crafted by hand'.
Machine learning is an ever-expanding field, with concepts such as machine vision and decision boundaries thrown around.
It's being introduced to video games; some of you may have seen the MarI/O video.
More and more the field grows.

Imagine the amount of labor lifted from creating video games (among other things) and the increase in quality.
Think about the possibility of NPCs that learn along with you.
Venture into modding by the swipe of a hand.

Couple this with non-machine learning advances:
remarkable animations.
realistic sounds in 3D space.
photo-realistic graphics.

So what are your thoughts?
( ͡° ͜ʖ ͡°)
 
The amount of shovelware we'd have would make the gulags look like a sandbox.
 
That ain't how it works. AI is good at solving well-defined problems with rules. Making a game is a creative process... there is no way for the computer to know whether a change is good or not, because there is no way for it to measure that.
 
Splintert said:
That ain't how it works. AI is good at solving well-defined problems with rules. Making a game is a creative process... there is no way for the computer to know whether a change is good or not, because there is no way for it to measure that.

I dunno, I feel that if it learned by observing or being told what a specific designer, producer, or content creator thinks looks good, it could perhaps weigh its options against the patterns it discovers. Obviously they wouldn't be "objectively" good changes, but it could perhaps provide "subjectively" good changes.
 
Splintert said:
That ain't how it works. AI is good at solving well-defined problems with rules. Making a game is a creative process... there is no way for the computer to know whether a change is good or not, because there is no way for it to measure that.
It should be a trivial task for the game to send telemetry to a server on everything the players do: how long they play, whether they keep coming back, etc.
Of course, then the result would be a really short and easy game, but baby steps.
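The telemetry half of that idea is easy to sketch. A minimal Python sketch, with a made-up event schema and the actual network call left out (in a real game the payload would be POSTed to a collection server, and the learner on the other end would optimize for session length, retention, and so on):

```python
import json

def build_session_payload(player_id, events, started, ended):
    """Package one play session as a JSON payload (hypothetical schema)."""
    return json.dumps({
        "player": player_id,
        "session_seconds": ended - started,   # how long they played
        "events": events,                     # e.g. deaths, level completions
        "event_count": len(events),
    })

# A 15-minute session with a single recorded event:
payload = build_session_payload(
    "p42",
    [{"type": "level_complete", "level": 1}],
    started=0, ended=900,
)
```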
 
I too like watching YouTube.

I'm not very optimistic in terms of AI. Even the most complex AI in a video game is still hopelessly simple.
It's still Pac-Man ghosts following you around, just in large 3D worlds.

Some of the videos just look like ads for Nvidia, CryEngine, etc.  :neutral:
 
Adorno said:
I too like watching YouTube.

I'm not very optimistic in terms of AI. Even the most complex AI in a video game is still hopelessly simple.
It's still Pac-Man ghosts following you around, just in large 3D worlds.

Some of the videos just look like ads for Nvidia, CryEngine, etc.  :neutral:

Yet. There is still room, though I wager a lot will come down to being connected to the internet. Cloud-based machine-learning servers would help AI in games to no end.
 
I've seen how easy it is to mess up an artificial neural network if you're intentional about it. The input you teach it with still has to be tailored carefully, and you still need people to supervise it and undo its mistakes. And it's still a system that cannot work outside of its pattern.

What I'm trying to say here is that you still need a sterile environment for it to work in. You're not really getting rid of the work that needs to be done, you're just changing its nature.
 
Vieira said:
I dunno, I feel that if it learned by observing or being told what a specific designer, producer, or content creator thinks looks good, it could perhaps weigh its options against the patterns it discovers. Obviously they wouldn't be "objectively" good changes, but it could perhaps provide "subjectively" good changes.

At which point you have a designer telling something else what to make, much like a publisher telling a development studio what to make, which is literally exactly what we have right now. AI is already being used to create games: http://www.polygon.com/2014/1/12/5295980/how-ai-game-developer-angelina-could-change-the-industry and the results are garbage. Computers don't have 'fun', they don't understand the concept, so they cannot currently know when a feature is a step forward, a step backward, or whether it even makes sense.

The only practical application of neural networks in gaming is making competitive PvP-like AIs. Having an AI play against other players thousands of times in a game like Total War could eventually, with some prodding, create not only a competent AI but an optimal one that never makes mistakes and plays perfectly. Is that even a good thing?
 
( ͡° ͜ʖ ͡°)                                          ( ͡° ͜ʖ ͡°)                                            ( ͡° ͜ʖ ͡°)

The unfortunate thing about ANGELINA is that it uses computational evolution,
which has proven to be a very ineffective means of reflecting real-world processes.
It has nothing to do with neural networks, which are what deep learning concerns; a much more powerful process.
Additionally, I don't think it's relevant that a computer (software) doesn't know how to have 'fun'.
As you saw with the MarI/O example, the software finds exploits that aid its steps toward a supervised goal.
It never cared whether or not it was having 'fun' with the game as a player.
You get?

( ͡° ͜ʖ ͡°)                                          ( ͡° ͜ʖ ͡°)                                            ( ͡° ͜ʖ ͡°)
 
Warning - while you were typing mcwiggum has posted some nonsense that shows he's a luddite. You may wish to review your post.
( ͡° ͜ʖ ͡°) You're beginning to think, Untitled. ( ͡° ͜ʖ ͡°)
If we, for a minute, attempt to unwrap the idea of 'fun', we might point to some examples concerning the use of time and how well it was spent.
In MarI/O, the software found ways to exploit the game that the developers didn't even know about.
Why then should we assume that humans have a monopoly on 'fun', a concept with a finite definition?
Why should we assume anything concerning how traditional software development can overlap with 'fun'?
Ljas said:
It should be a trivial task for the game to send telemetry to a server on everything the players do: how long they play, whether they keep coming back, etc.
Of course, then the result would be a really short and easy game, but baby steps.
This is a good example of how to implement the supervision in the output.
Keeping in mind that this thread is entirely hypothetical we might call it fun-o-metrics. ( ͡° ͜ʖ ͡°)
The Latin word for 'game' is 'ludus', I believe. Perhaps ludometry.
( ͡° ͜ʖ ͡°)        ( ͡° ͜ʖ ͡°)        ( ͡° ͜ʖ ͡°)
 
Omzdog said:
Why then should we assume that humans have a monopoly on 'fun', a concept with a finite definition?
I'm not knowledgeable about the subject at all, but I'd imagine it is because humans have the capacity to understand meaning while machines, even though they would appear to mimic meaning, only 'understand' syntax.

https://plato.stanford.edu/entries/chinese-room/
 
I study linguistics and understand the concept of the Chinese Room.
Neural networks work a lot like that; they even have what's called a 'hidden layer',
which the programmer doesn't necessarily code by hand; rather, the machine adjusts its weights to mimic the learning of the human brain.
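To make the 'hidden layer' bit concrete, here is a minimal sketch in Python/NumPy. The layer sizes and random starting weights are arbitrary; the point is only that the middle block of weights is never written by hand, training would adjust it:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-3-1 network. W1 and W2 start random; training (not shown here)
# would adjust them. The "hidden layer" is the middle set of activations
# that no programmer codes directly.
W1 = rng.normal(size=(2, 3))   # input  -> hidden weights
W2 = rng.normal(size=(3, 1))   # hidden -> output weights

def forward(x):
    hidden = np.tanh(x @ W1)   # hidden-layer activations
    return np.tanh(hidden @ W2)

y = forward(np.array([[0.5, -0.5]]))   # output in (-1, 1), shape (1, 1)
```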

I think I can put it best by quoting Edsger W. Dijkstra, the famous computer scientist:
"The question of whether machines can think... is about as relevant as the question of whether submarines can swim."

Submarines aren't necessarily emulating fish, a life form that has adapted quite adequately to life under water,
but they can still get you from port to port, so the semantics of whether they can 'swim' are not pertinent.
( ͡° ͜ʖ ͡°)                                                      ( ͡° ͜ʖ ͡°)                                                                    ( ͡° ͜ʖ ͡°)
 
Neural networks aren't magic; this is where semantics are super important. They don't 'learn' in the traditional meaning of the word. You can 'teach' them how to solve a problem and then, with supervision to avoid deviations, they can learn how to solve it optimally. They still can't state the problem unsupervised, and won't be able to.

That is a very nice quote, but it suggests they're capable of something akin to thinking. It's the next level of the pattern-matching that computers have been doing since the idea was conceived, but it's still working in patterns. You just add more computing power.
 
( ͡° ͜ʖ ͡°)
Let us assume that a bunch of data scientists got together and actually attempted to create machine-learned video game software.
They wouldn't begin with nothing. The field already has ample examples of human output that can serve as an input data set for the machine.
It would be difficult, but all they'd need to do initially is feed the machine some bit representation of what a 'video game' is, and the machine begins learning the syntax, or constraints.
This syntax is unknown to the programmers, but the machine then spits out games of its own; bad to start with, but as the outputs are massaged and weighted through techniques like back-propagation, they improve.
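The 'massaged and weighted through back-propagation' step, reduced to its scalar core: a toy sketch where gradient descent on a single weight recovers the hidden rule y = 2x from examples alone.

```python
import numpy as np

# Toy supervised loop: repeatedly nudge a weight in the direction that
# shrinks the mean-squared error -- the same idea back-propagation
# applies layer by layer in a deep network.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 2.0 * xs                # target relation the model must discover
w = 0.0                      # start knowing nothing

for _ in range(100):
    pred = w * xs
    grad = 2.0 * np.mean((pred - ys) * xs)   # d(MSE)/dw
    w -= 0.1 * grad                          # gradient-descent step

# w converges close to 2.0
```

No programmer ever writes "multiply by 2" anywhere; the rule emerges from the data and the error signal.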
( ͡° ͜ʖ ͡°)
Whether you believe this is 'thinking' or not is not the point of this thread; I only brought up Dijkstra to satisfy Untitled's philosophical inquiry.
Already we have examples of machines spitting out what we thought was impossible. Here is a good example:
[Image: neural style transfer — a base image of Goofy combined with a Van Gogh painting into a new rendering.]

Here we see two inputs, one stylistic, the other used as a base. The machine doesn't need to understand that one is Goofy and the other is a Van Gogh.
The machine is able to strip 'features' from these and utilize its internally learned syntax to produce a work of its own.
It would only be a few steps further to replace simple pixels with bits of game data.
( ͡° ͜ʖ ͡°)
 