Duck house on the lake: NFT Art
I brought my special camera out to a national park in the northwest tip of Georgia. Coming across a lake, I saw a duck house on the water's edge. It was an overcast day, and it was just me and my friend walking around the lakes. I shot with my modified Lumix GH3 through an ultraviolet bandpass filter. It was a cool photo, but I knew my style transfer tools could transform it into something like art.
I took one of my ultraviolet photos and stylized it using a technique called "Style Transfer". Basically, I use a computer algorithm to take creative elements from a certain artist's work and imprint them on a picture I took. Once-real photos are turned into art by my graphics card and a neural network trained to recognize shapes, lines, and objects.
Anders Askevold was a Norwegian painter whose work carried traits of national romantic expression; he was known for his fjord landscape paintings from Western Norway. I used one of his paintings, "Norwegian Fjord Landscape", as the art style source; it is linked below for reference.
I also included an unlockable that has a bonus style transfer using La maison du pêcheur, Varengeville by Claude Monet as the style image for the render.
Three editions will be for sale for 27 Hive, or a little under $3 at the time of this post.
Ultraviolet photo used as content for the render
| Camera Model | Lumix GH3, modified by LifePixel for Full Spectrum |
|---|---|
| Lens | Olympus M.ZUIKO DIGITAL ED 12mm-50mm |
| Filter | B+W UV Black (403) Filter |
| Aperture | f/3.6 |
| Shutter Speed | 1/40 sec |
| Film Speed | 800 |
| Spectrum | Ultraviolet-A and Infrared (UVA and IR) |
| Wavelength | 320 to 385 nm (UVA) and 750 nm (IR) |
| Location | Rome, Georgia, USA |
Included above is a link to the software I use to turn my real-life photos into artwork. I try out dozens of artists' styles before I find one that mixes well with the content of my photos. Neural style transfer is an optimization technique that takes two images, a content image and a style reference image (such as artwork from a painter), and blends them together so the output image looks like the content image, but painted in the style of the style reference image.
This is implemented by optimizing the output image to match the content details of the content image and the style details of the style reference image. These details are extracted from the images using a convolutional network.
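As a rough illustration of how that optimization is scored, here is a minimal NumPy sketch of the two losses neural style transfer balances. This is a toy, not the tool used for these renders: the real method runs feature maps through a pretrained convolutional network (such as VGG), while the arrays below simply stand in for those activations.

```python
import numpy as np

def gram_matrix(features):
    """Style is captured as feature correlations: a (channels x channels)
    Gram matrix, averaged over all spatial positions."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # positions x channels
    return flat.T @ flat / (h * w)      # channels x channels

def content_loss(gen, content):
    """Mean squared difference of raw feature maps preserves the layout
    of the content photo."""
    return np.mean((gen - content) ** 2)

def style_loss(gen, style):
    """Mean squared difference of Gram matrices transfers the painter's
    texture without copying where things are in the style image."""
    return np.mean((gram_matrix(gen) - gram_matrix(style)) ** 2)

# Fake "feature maps" standing in for convolutional activations.
rng = np.random.default_rng(0)
content_feats = rng.normal(size=(8, 8, 4))
style_feats = rng.normal(size=(8, 8, 4))
output = content_feats.copy()           # start from the content image

# The optimizer would nudge `output` to shrink this weighted sum.
total = content_loss(output, content_feats) + style_loss(output, style_feats)
print(total)
```

Starting the output from the content image makes the content loss zero, so the remaining error is entirely style; gradient descent then trades a little content fidelity for a lot of style similarity.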
Source: https://www.tensorflow.org/tutorials/generative/style_transfer