Audio output was disabled: as such, the sensory input was equated between the human players and the agents.
It's interesting that this implicitly assumes audio carries valuable information, which disabling it withholds from both the agents and the humans.
What are people's thoughts on this: would an audio input help the learner? And if so, what would work best, e.g. concatenating audio features with the output of the convolutional layers?
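To make that second idea concrete, here's a rough sketch of what late fusion at the conv output might look like in PyTorch. This is just my own illustration, not anything from the paper: the `AudioVisualDQN` name, the 128-dim audio feature vector, and the small audio encoder are all assumptions; only the conv stack follows the standard DQN layout (Mnih et al. 2015).

```python
import torch
import torch.nn as nn

class AudioVisualDQN(nn.Module):
    """Hypothetical late-fusion DQN variant: audio features are
    concatenated with the flattened conv output before the
    fully-connected layers. Frame input is the usual 4x84x84 stack;
    the audio feature size (e.g. a spectrogram slice) is a guess."""

    def __init__(self, n_actions: int, audio_dim: int = 128):
        super().__init__()
        # Standard DQN conv stack over the stacked frames
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        conv_out = 64 * 7 * 7  # flattened conv size for 84x84 input
        # Small encoder for the audio feature vector (assumed form)
        self.audio_enc = nn.Sequential(
            nn.Linear(audio_dim, 64), nn.ReLU(),
        )
        # Fused head: visual + audio features -> Q-values
        self.head = nn.Sequential(
            nn.Linear(conv_out + 64, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, frames: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        v = self.conv(frames)      # (B, 3136) visual features
        a = self.audio_enc(audio)  # (B, 64) audio features
        return self.head(torch.cat([v, a], dim=1))

if __name__ == "__main__":
    net = AudioVisualDQN(n_actions=6)
    q = net(torch.randn(2, 4, 84, 84), torch.randn(2, 128))
    print(q.shape)  # torch.Size([2, 6])
```

Early fusion (feeding audio in alongside the pixels) would be the other obvious option, but concatenating at the conv output seems more natural since audio has no spatial structure to convolve over.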
When the environment (game) is in a 'loud, high-pitch' state, watch out!