Machine/Deep Learning and the Direction of the Gaming Industry


It's still the result of a bunch of steps in which the machine compares its results against the test inputs. It's not mystical powers, just a bunch of gates through which some results are let and some are not. It's an implementation of the thought experiment with monkeys and typewriters, only this time the monkeys are super-fast and silicon-based and you don't have to search by hand for the pages decently close to Shakespeare's dramas.

What you'll get as output is a sea of game-like programs following certain recognisable patterns without rhyme or reason - unless supervised. Sure, weights can be added to 'tell' it what is desirable and what is not. But at what point do you stop? And how much of it is the machine working on its own?

And what is so impossible about putting two patterns together? All it proves is that we certainly have impressive computing power.
 
( ͡° ͜ʖ ͡°)
I cannot have a conversation with you when you are using words like 'gates' and the 'monkeys with typewriters' analogy, because this is inaccurate.
You also reference 'super fast silicon-based' hardware, but this is irrelevant when we are discussing software.
Additionally, you seem to think this software is put together by brute strength.
It's not. ( ͡° ͜ʖ ͡°)
The code that goes into neural networks is beautifully short.
I hope that you read more about neural networks; they are not what you think.
( ͡° ͜ʖ ͡°)
 
Oh yes, I'm sorry, in my analogy every so often the monkeys furthest from the desired effect are replaced.

You still need to tell it what the desired effect is. It won't spew out something original, because that's something you cannot teach it to do. You can't attach values to something that doesn't yet exist, and at the end of the day it's 'learning' through comparing values.
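For what it's worth, the "replace the worst monkeys" scheme being argued about is essentially a genetic algorithm, and it can be sketched in a few lines. The target string, alphabet, and parameters below are made up purely for illustration:

```python
import random

TARGET = "to be or not to be"               # the goal a human supervisor defines
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    """Count the characters that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    """Randomly retype some characters -- the monkeys at work."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, generations=500, seed=0):
    random.seed(seed)
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        survivors = pop[: pop_size // 2]     # the worst half is 'replaced'
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]
    return max(pop, key=fitness)
```

The point both sides are circling is visible right in the code: nothing converges without the human-supplied `fitness` function.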
 
( ͡° ͜ʖ ͡°)
What you are saying is an attempt to compensate for your lack of knowledge in the field which is understandable.
You are drawing on simplicities to explain complexities. But it is rife with failure.
For instance, it doesn't matter how many monkeys you replace; the monkey itself cannot 'learn' or 'think' (to the degree required to complete a single comprehensive work).
Don't mistake the improbability of writing a Shakespearean work for the complexity of understanding English grammar.
We are talking about the latter. The machine learns to represent intrinsic symbols for externalities (this is called classification if you were interested).
This is why your analogy fails since the monkeys never learn English grammar even if they did happen to type an entire Shakespearean piece.
Do not look here said:
It won't spew out something original, because that's something you cannot teach it how to do.
Did you not see the example with Goofy and van Gogh?
Is the output not an original? It has never happened before in nature or through artificial means.
( ͡° ͜ʖ ͡°) It seems original to me.
Do not look here said:
You can't attach values to something yet to exist and at the end of the day it's 'learning' through comparing values.
This is unfortunately meaningless unless further explained.
But I will save you the time and energy.
https://en.wikipedia.org/wiki/Decision_boundary
( ͡° ͜ʖ ͡°)
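The linked decision-boundary idea can be made concrete with something as small as a perceptron. A sketch, assuming a made-up, linearly separable toy dataset (points above the line y = x are class 1, points below are class 0):

```python
# A perceptron learning a linear decision boundary in 2-D.
# The dataset is invented for illustration.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((2.0, 3.0), 1),
        ((1.0, 0.0), 0), ((2.0, 1.0), 0), ((3.0, 2.0), 0)]

w = [0.0, 0.0]   # weights -- the values that 'learning' adjusts
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(point):
    """Which side of the current boundary w.x + b = 0 is the point on?"""
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Training: nudge the boundary toward every misclassified point.
for _ in range(100):
    for point, label in data:
        error = label - predict(point)
        w[0] += lr * error * point[0]
        w[1] += lr * error * point[1]
        b += lr * error

# After training, the learned boundary separates the two classes.
print([predict(point) for point, _ in data])
```

The "learning" is nothing more than repeatedly moving a line until the labelled examples fall on the right sides of it, which is exactly what a decision boundary is.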
 
All right, I admit my defeat when I see it.

I still think calling it 'no longer crafted by hand' is an overestimation of a neural network's capability to work unsupervised.
 
( ͡° ͜ʖ ͡°)
Yes, we will have to wait to see who is in the right on this issue.
( ͡° ͜ʖ ͡°)
 
Double post magic. ( ͡° ͜ʖ ͡°)

I think this thread will die now.
But I simply wanted to bring to your attention some new aspects that seem to be emerging.
I might update if I find something worthy of discussion, and I hope I've stirred some interest in this field.
 
I saw a thread labelled "Amontadillo" and thought, "This could be cool."  Imagine my disappointment.

Imagine my further disappointment when I find myself in agreement with Docm.  :facepalm:
 
Omzdog said:
We are talking about the latter. The machine learns to represent intrinsic symbols for externalities (this is called classification if you were interested).
This is why your analogy fails since the monkeys never learn English grammar even if they did happen to type an entire Shakespearean piece.

This is exactly what a computer ends up doing, though. It just spams solutions until one of them is closer to "correct", whatever "correct" is defined as. But in the case of creating a video game, "correct" is ambiguous. The computer did not learn English grammar. It still cannot form sentences of its own will. Look at all the chat-bots that already exist. They appear to use English grammar and to learn, but it's extremely easy to fool them into nonsensical responses because they don't actually understand what they're saying; it's only input -> processing -> output.

So assume someone is able to create a magical algorithm that can take two existing video games, stick them together, and output a result with random changes. How does it decide if it's better? How does it decide if it makes sense? These are problems a computer currently cannot solve; that's why things like the halting problem exist. They lack a certain something that isn't really definable without going into philosophical stuff... "motivation" would be a decent description.
 
( ͡° ͜ʖ ͡°)
Computers (hardware) don't 'spam solutions'; they are predefined to compute an extremely large amount of simple arithmetic.
These calculations are exact, not approximations.

So far no neural network chat bot has been created that I know of.
So this is a poor example of the limitations of our technology.
But this is an excellent proposal that I'm sure will be carried out in future.
Also a point to make: this 'AI' chat bot doesn't learn.
It tries to arrange words into comprehensible sentences.
Their failure is not to be mistaken for the machine's inability to produce meaningful language.
Splintert said:
How does it decide if it's better?
That is the point of supervision.
Supervision is another way of coordinating an arbitrary goal.
Splintert said:
How does it decide if it makes sense?
'Sense' is another way of deciphering if an entity has syntactic value.
You are bringing up what has already been discussed in this thread.
I'm not going to go into detail about syntax but it is the crux of your confusion.
Splintert said:
These are problems a computer currently cannot solve
Believe it or not, they already are; it's a question of the software, not the limitations of the hardware.
Splintert said:
They lack a certain something that isn't really definable without going into philosophical stuff... "motivation" would be a decent description.
No, the lack is not motivation. Today the lack is that nobody has tried it, and that it might not be worth trying, since games are complex proprietary products with no singular data representation.
( ͡° ͜ʖ ͡°)
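To make "supervision is another way of coordinating a goal" concrete: in supervised learning the goal lives entirely in the labelled examples and the loss function. A toy sketch, where the invented task is learning y = 2x:

```python
# 'Supervision' reduced to its core: labelled examples plus a loss.
# 'Better' simply means lower loss; the supervisor never says how
# to solve the task. The task here (learn y = 2x) is made up.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def loss(w):
    """Mean squared error of the candidate model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in examples) / len(examples)

def better(w1, w2):
    """The whole 'supervisory' judgement: pick the lower-loss candidate."""
    return w1 if loss(w1) < loss(w2) else w2
```

Here `better(1.9, 3.0)` picks 1.9, since it is closer to the true w = 2; swap in an undefinable loss like "is this game fun?" and the same machinery has nothing to optimize, which is the disagreement in this thread in miniature.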
 
Omzdog said:
Believe it or not, they already are; it's a question of the software, not the limitations of the hardware.

Except not. https://en.wikipedia.org/wiki/Halting_problem

It's literally unsolvable.

To your last point, that is not what I meant. I mean the computer lacks the motivation to give itself inputs. A person can, out of nowhere, motivate themselves to go do something to improve their situation; a computer only responds.
 
( ͡° ͜ʖ ͡°)
You feed the machine input.
It doesn't need motivation to find data.
But what you are saying is nonsensical so I won't entertain it further.

Also, the Halting Problem is irrelevant.
In fact I have no idea why you are bringing it up,
other than to sound well versed in computational complexity theory. ( ͡° ͜ʖ ͡°)
The order in which the machine is classifying features is P =/= NP as we are simply manipulating bits of data with an activation function,
although I will admit fuzzy logic can be the undoing of some networks, yet there are workarounds. I hope I've said this correctly.
( ͡° ͜ʖ ͡°)
 
Now that I have a minute I can discuss in more depth.

Omzdog said:
( ͡° ͜ʖ ͡°)
Computers (hardware) don't 'spam solutions'; they are predefined to compute an extremely large amount of simple arithmetic.
These calculations are exact, not approximations.

Yes, they do. The Mar/IO video describes the process. It randomly tries to insert new inputs into the process to produce a more desirable output, which, again, in the case of Mar/IO is clearly defined: get to the end of the level.
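The "randomly try, keep what scores better" loop being described can be written down in a few lines of hill climbing. The fitness function below is a made-up stand-in for a clearly defined score such as "distance reached in the level":

```python
import random

def fitness(x):
    """Stand-in for a clearly defined score ('distance reached')."""
    return -(x - 7.0) ** 2          # peaks at x = 7; purely illustrative

def hill_climb(start=0.0, steps=2000, seed=0):
    """Randomly perturb the candidate; keep it only if the score improves."""
    random.seed(seed)
    best = start
    for _ in range(steps):
        candidate = best + random.uniform(-0.5, 0.5)   # the random tweak
        if fitness(candidate) > fitness(best):          # the defined goal
            best = candidate
    return best
```

The loop never regresses, but only because `fitness` was defined in advance; remove the scoring function and the random tweaks go nowhere.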

Omzdog said:
So far no neural network chat bot has been created that I know of.
So this is a poor example of the limitations of our technology.
But this is an excellent proposal that I'm sure will be carried out in future.
Also a point to make: this 'AI' chat bot doesn't learn.
It tries to arrange words into comprehensible sentences.
Their failure is not to be mistaken for the machine's inability to produce meaningful language.

A simple Google search might do you some good. http://neuralconvo.huggingface.co/ Their failure to produce meaningful language is a good example of exactly the inability you're denying.

Omzdog said:
That is the point of supervision.
Supervision is another way of coordinating an arbitrary goal.

At which point you have a supervisor picking results from a random game generator. That's all it amounts to. This is not a very good way to give a computer a clearly defined goal, which neural networks require in order to function.

Omzdog said:
'Sense' is another way of deciphering if an entity has syntactic value.
You are bringing up what has already been discussed in this thread.
I'm not going to go into detail about syntax but it is the crux of your confusion.

Sense is an arbitrary goal, not a clearly defined one, which is exactly the issue. It has nothing to do with the implementation details or with what the input or output is; it's the mechanism that is flawed. Take for example the ability of a neural network to create a sorting algorithm from examples, which a glance at Wikipedia will tell you it can do. Why does that work? Because the output is well defined. A sorted list always looks the same as every other sorted list: [x, y, z, ...] where x < y < z. In the case of Mar/IO, according to the computer, any solution to the level is equivalent. Even if you add more variables like time to completion to optimize towards the best solution, it all ends up being a clearly defined goal that the computer can optimize towards. A concept as complex as 'create a game' cannot be optimized, because a game cannot be objectively better than another.
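The sortedness example is easy to make precise: "sorted" has a mechanical test, zero out-of-order pairs, with no human judgement involved. A sketch:

```python
def inversions(xs):
    """Count out-of-order pairs; the result is 0 exactly when xs is sorted.
    This is the kind of clearly defined, checkable goal an optimizer needs."""
    return sum(1 for i in range(len(xs))
                 for j in range(i + 1, len(xs))
                 if xs[i] > xs[j])

print(inversions([1, 2, 3, 4]))   # sorted: 0 inversions
print(inversions([4, 3, 2, 1]))   # fully reversed: 6 inversions
```

No analogous function exists for "this game is fun", which is the argument being made here.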
 
Splintert said:
Yes, they do. The Mar/IO video describes the process. It randomly tries to insert new inputs into the process to produce a more desirable output, which, again, in the case of Mar/IO is clearly defined: get to the end of the level.
A computer doesn't equate to the software, so let's not argue semantics (in the non-academic meaning of semantics).
Computers compute. The software is different from the computer.
Splintert said:
A concept as complex as 'create a game' cannot be optimized, because a game cannot be objectively better than another.
I discussed the hypothetical of ludometry on the first page.
Splintert said:
A simple Google search might do you some good. http://neuralconvo.huggingface.co/ Their failure to produce meaningful language is a good example of exactly the inability you're denying.
This is a nice find. Part of the problem is that they spit out meaningful sentences yet aren't trained to produce meaningful pragmatics.
So there is some wiggle room as to the power of the technology. I assume that's because they never bothered to implement a pragmatics layer in the network or something like that.
But such a thing would surely not be easy. We'd need an AI that codes for the AI to do that or something.
Very nice.
( ͡° ͜ʖ ͡°)
 
We are not arguing semantics. I am saying in plain terms that 'create a game' has no clearly defined goal and as such cannot be solved by a neural algorithm. You've yet to provide a single example demonstrating it is possible, only meaningless conjecture.

You cannot mathematically define fun. Simple as that. Unsolvable. I have provided examples of problems with a mathematical definition of the goal, and that is why a computer can solve them: all it is doing is optimizing variables. Computers are rather good at that.
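"Optimizing variables" here means something as mundane as gradient descent. A minimal sketch, minimising a made-up objective f(w) = (w - 3)^2:

```python
# Gradient descent on f(w) = (w - 3)^2: repeatedly step downhill.
# The objective is invented; the point is that it is mathematically defined.
def grad(w):
    return 2.0 * (w - 3.0)   # derivative of (w - 3)^2

w = 0.0                      # starting guess
for _ in range(200):
    w -= 0.1 * grad(w)       # step against the gradient; 0.1 is the step size
```

After a couple of hundred steps w sits at the minimum, w = 3; the same loop has no purchase on a goal like "fun" that admits no f(w) at all.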

So you're saying we need the problem to be solved in order to solve the problem? Doesn't sound too hopeful.
 