Neural Image Representation

This project is an experiment in training a neural network to represent a single image. The horizontal and vertical location of each pixel serve as the network's input, and the corresponding RGB channel values as its output. All input and output values are mapped into the continuous -1 to 1 range before training. Because the discrete pixel coordinates are now represented continuously, the network can interpolate to scale the image up, or extrapolate to guess what pixels lie outside the image bounds. The networks are trained with backpropagation, which plateaus once a good model is fit. However, I'm planning to implement an evolutionary training algorithm, which will be interesting to compare against backpropagation.
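
To make the mapping concrete, here is a minimal sketch of the pixel-to-sample conversion in Python with NumPy. The function name and the choice of library are mine for illustration, not necessarily what this project uses:

```python
import numpy as np

def image_to_dataset(img):
    """Turn an image into (input, output) training pairs.

    img: H x W x 3 uint8 RGB array.
    Returns coords (H*W, 2) in [-1, 1] and colors (H*W, 3) in [-1, 1].
    """
    h, w, _ = img.shape
    # Map row/column indices onto the continuous [-1, 1] range.
    ys, xs = np.meshgrid(np.linspace(-1.0, 1.0, h),
                         np.linspace(-1.0, 1.0, w), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    # Map 0..255 channel values onto [-1, 1] as well.
    colors = img.reshape(-1, 3).astype(np.float32) / 127.5 - 1.0
    return coords, colors
```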

Here, a neural network with 3 hidden layers of 128 neurons each is trained on a 16×16-pixel image of a snake's eye. The original training image is compared with both traditional bicubic scaling and the neural reconstruction. Finally, four learning states are shown, including out-of-bounds extrapolation.

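A rough sketch of how such a network could be built and trained. The layer sizes come from the description above; the framework (PyTorch), the tanh activation, the optimizer, and the step count are all assumptions for illustration:

```python
import torch
import torch.nn as nn

# 3 hidden layers of 128 neurons; 2 inputs (x, y), 3 outputs (r, g, b).
# tanh is an assumption; it conveniently bounds outputs to [-1, 1].
model = nn.Sequential(
    nn.Linear(2, 128), nn.Tanh(),
    nn.Linear(128, 128), nn.Tanh(),
    nn.Linear(128, 128), nn.Tanh(),
    nn.Linear(128, 3), nn.Tanh(),
)

coords, colors = image_to_dataset(img)   # sketch above; img is the 16x16 source
coords, colors = map(torch.from_numpy, (coords, colors))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(5000):                 # train until the loss plateaus
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), colors)
    loss.backward()                      # backpropagation
    opt.step()

# To upscale, sample the network on a denser grid. Stretching the grid past
# [-1, 1] (e.g. linspace(-1.5, 1.5, ...)) extrapolates beyond the image bounds.
side = 256
axis = torch.linspace(-1.0, 1.0, side)
ys, xs = torch.meshgrid(axis, axis, indexing="ij")
query = torch.stack([xs.ravel(), ys.ravel()], dim=1)
with torch.no_grad():
    rgb = (model(query) + 1.0) * 127.5   # back to 0..255
upscaled = rgb.reshape(side, side, 3).clamp(0, 255).byte().numpy()
```

Because the trained network is a continuous function of (x, y), the query grid can be made arbitrarily dense, which is what makes resolution-independent upscaling possible.
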
The second test image is an Overlord unit portrait from the 1998 game StarCraft. Bicubic scaling produces a blocky, blurry image, while the neural representation produces a clean, smooth image with some detail missing. Combining the bicubic and neural images retains most of the bicubic detail while minimizing the blockiness.
