Category: Vimeo

Vimeo favourites.

Journey through the layers of the mind

first tests playing with #deepdream #inceptionism

A visualization of what’s happening inside the mind of an artificial neural network.

By recognizing forms in these images, your mind is already reflecting what’s going on in the software, projecting its own bias onto what it sees. You think you are seeing things, perhaps puppies, slugs, birds, reptiles etc. If you look carefully, that’s not what’s in there. But those are the closest things your mind can match to what it’s seeing. Your mind is struggling to put together images based on what you know. And that’s exactly what’s happening in the software. And you’ve been training your mind for years, probably decades. These neural networks are usually trained for a few hours, days or weeks.

————————————————————

In non-technical speak:

An artificial neural network can be thought of as analogous to a brain (immensely, immensely, immensely simplified. nothing like a brain really). It consists of layers of neurons and connections between neurons. Information is stored in this network as ‘weights’ (strengths) of connections between neurons. Low layers (i.e. closer to the input, e.g. ‘eyes’) store (and recognise) low-level abstract features (corners, edges, orientations etc.) and higher layers store (and recognise) higher-level features. This is analogous to how information is stored in the mammalian cerebral cortex (i.e. our brain).

Here a neural network has been ‘trained’ on millions of images – i.e. the images have been fed into the network, and the network has ‘learnt’ about them (establishing the weights / strengths of the connections between neurons). (NB. This is a specific database of images fed into the network known as ImageNet http://j.mp/1NLTioT )

Then when the network is fed a new unknown image (e.g. me), it tries to make sense of (i.e. recognise) this new image in context of what it already knows, i.e. what it’s already been trained on.

This can be thought of as asking the network “Based on what you’ve seen / what you know, what do you think this is?”, and is analogous to you recognising objects in clouds or ink / rorschach tests etc.

The effect is further exaggerated by encouraging the algorithm to generate an image of what it ‘thinks’ it is seeing, and feeding that image back into the input. Then it’s asked to reevaluate, creating a positive feedback loop, reinforcing the biased misinterpretation.

This is like asking you to draw what you think you see in the clouds, and then asking you to look at your drawing and draw what you think you are seeing in your drawing, and so on.

That last sentence was actually not fully accurate. It would be accurate if, instead of asking you to draw what you think you saw in the clouds, we scanned your brain, looked at a particular group of neurons, reconstructed an image based on the firing patterns of those neurons – based on the in-between representational states in your brain – and gave *that* image to you to look at. Then you would try to make sense of (i.e. recognise) *that* image, and the whole process would be repeated.

We aren’t actually asking the system what it thinks the image is, we’re extracting the image from somewhere inside the network – from any one of the layers. Since different layers store different levels of abstraction and detail, picking different layers to generate the ‘internal picture’ highlights different features.
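The loop described above – extract an internal picture, feed it back in, reevaluate – can be sketched with a toy example. Nothing here is the original Google code: the two random weight matrices below merely stand in for a trained network, and the ‘dream step’ nudges the input to amplify whichever layer’s activations you pick, the layer choice controlling which kind of features get reinforced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: two layers of fixed random weights.
# (Real DeepDream uses a large trained convnet; this only shows the loop.)
W1 = rng.standard_normal((64, 32)) * 0.5   # "low" layer weights
W2 = rng.standard_normal((32, 16)) * 0.5   # "high" layer weights

def layer_activations(img, layer):
    h1 = np.tanh(img @ W1)
    return h1 if layer == 1 else np.tanh(h1 @ W2)

def dream_step(img, layer, lr=0.1):
    """One feedback step: nudge the image to amplify what `layer` already sees.

    Objective is 0.5 * ||activations||^2, maximised by gradient ascent
    (gradients written out by hand for this tiny tanh network).
    """
    h1 = np.tanh(img @ W1)
    if layer == 1:
        g_h1 = h1                                  # dL/dh1 directly
    else:
        h2 = np.tanh(h1 @ W2)
        g_h1 = (h2 * (1.0 - h2**2)) @ W2.T         # back through layer 2
    grad = (g_h1 * (1.0 - h1**2)) @ W1.T           # back through layer 1
    return img + lr * grad / (np.abs(grad).max() + 1e-8)

img = rng.standard_normal(64) * 0.01               # the "unknown image"
start_norm = np.linalg.norm(layer_activations(img, layer=2))

for _ in range(50):                                # the positive feedback loop
    img = dream_step(img, layer=2)                 # reinterpret our own output

end_norm = np.linalg.norm(layer_activations(img, layer=2))
print(f"layer-2 response: {start_norm:.4f} -> {end_norm:.4f}")
```

After the loop, the chosen layer responds far more strongly to the image than it did at the start – the network’s faint initial ‘misinterpretation’ has been fed back and reinforced, which is the mechanism behind the hallucinated puppies and slugs.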

————————————————————
All based on the Google research by Alexander Mordvintsev (Software Engineer), Christopher Olah (Software Engineering Intern) and Mike Tyka (Software Engineer)

http://j.mp/1NLTkwU

http://j.mp/1NLTkwV

(View on Vimeo)

Colossus by Pat Vale

follow me on Instagram – http://j.mp/1IJwVkT
follow me on facebook – http://j.mp/1KZmR4C

Massive thanks to John Barber for the incredible musical score. Hear more of his work at http://j.mp/1IJwWFv and http://j.mp/1IJwWFz.

Thanks to Colorist Daniel Silverman at MPC (and Ariella Amrami)

Dom del Torto at Big Animal

Colossus is a drawing that I made in New York during December 2014.

(View on Vimeo)

The Fallen of World War II

An animated data-driven documentary about war and peace, The Fallen of World War II looks at the human cost of the Second World War and compares the numbers to other wars in history, including trends in recent conflicts.

Visit http://www.fallen.io for the interactive version and more information

Written, directed, coded, narrated by https://twitter.com/neilhalloran
Sound and music by https://twitter.com/Dolhaz

(View on Vimeo)

Lunar economic zone by Zhan Wang

See more architecture and design movies at dezeen.com/movies

Architectural Association graduate Zhan Wang has produced an animation depicting a fictional technotopian future scenario in which China has built a giant port to distribute minerals mined from the moon.

Zhan Wang’s Lunar Economic Zone project imagines a celebration taking place in Shenzhen in the year 2028 to mark the arrival of the first shipment of lunar minerals.

The animation portrays the architecture and infrastructure required by such a system and the way the parade might be propagandised to present China’s technological and economic prowess through the lens of the global media.

See the full story on Dezeen: http://j.mp/1AcZmVt

(View on Vimeo)

ATROPA — Sci-fi Short

An Off-World Detective investigates the missing research vessel ATROPA. Concept short inspired by ’70s and ’80s sci-fi classics like Alien and Blade Runner.

Directed by Eli Sasich • Written by Clay Tolbert
(Agent) Trevor Astbury • CAA
(Manager) Eric Williams • Zero Gravity Management

Produced by Chris Bryant
Co-Produced by Lieren Stuivenvolt Allen
Director of Photography Greg Cotten
Production Designer Alec Contestabile
Edited by Zachary Anderson
Music by Kevin Riepl
Visual Effects by Ryan Wieber
VFX Supervisors Ryan Wieber & Tobias Richter
Spaceship VFX by Tobias Richter & The Light Works

Starring:
Anthony Bonaventura
Jeannie Bolet
David M. Edelstien
Ben Kliewer
Chris Voss

Making-of interview: http://j.mp/1zSsIrv

© Corridor Productions 2015

(View on Vimeo)

Teacher of Algorithms

Speculative Vision commissioned by http://j.mp/1K9rSa9

We think of smart/learning objects as not-yet-finished entities that can evolve their behaviours by observing, reading and interpreting our habits. They train their algorithms with deep learning or similar techniques to constantly adapt and refine their decisions.
But what if things are not that good at learning after all, without some help?
What if, in the near future, I could just get a person to train my products, as I do with pets?
Why deal with the initial problems, and risk that they might even learn wrongly?
So, what if there were an algorithm trainer?

Thanks to:
Azyre Yang as The Teacher
Carmelo Ferreri as The coffee machine guy

Shot by Andrea Carlon
Story and Edit by Simone Rebaudengo
Sound by Daniel Prost
Music: Lovely Nanjing

(View on Vimeo)

The Forensic Photographer

Nick Marsh has been a Forensic Photographer for over 20 years. With budget cuts and affordable digital technology, his is fast becoming a dying craft. This is an insight into his work and what it means to him.

Director / Editor – David Beazley
Cameraman – Max Brill
Sound Design – Andrew Deme
Colourist – Brendan Buckingham @ The Mill

Thanks to:
Nick Marsh
Judy Kerr
The London MET Police

(View on Vimeo)

FITC Tokyo 2015 Titles

Now in its sixth year, FITC Tokyo 2015 consists of presentations from some of the most interesting and engaging digital creators from all around the world. To commemorate FITC Tokyo’s inaugural title sequence we sought to encapsulate the city itself—distilled to graphic form. Aiming to contrast the harmonies of traditional Japanese culture against the backdrop and sensory overload of present-day Tokyo, we meticulously crafted elegant typographic forms to collide with abrasive, overstimulating glitch—giving way to a progressive journey where moments of extreme chaos fold into temporary tranquility.

Credits
Director: Ash Thorp
Producer: Andrew Hawryluk
Art Director: Michael Rigley
Type Designer: Nicolas Girard
Designers: Ash Thorp, Michael Rigley, Nicolas Girard
Type Animators: Nicolas Girard, Alasdair Willson
Animators: Michael Rigley, Chris Bjerre, Andrew Hawryluk
Computational Artist: Albert Omoss
Process Reel Editor: Franck Deron
Composer: Pilotpriest

Links
Making-of: http://j.mp/1E48S9Y

Ash Thorp: http://ashthorp.com/
Andrew Hawryluk: http://andrewh.tv/
Michael Rigley: http://j.mp/1A9yZPs
Nicolas Girard: http://worship.to/
Alasdair Willson: http://j.mp/1E48Sa1
Chris Bjerre: http://chrisb.tv/
Albert Omoss: http://j.mp/1A9yZPt
Franck Deron: http://j.mp/1A9z2ef
Pilotpriest: http://j.mp/1A9yZPw

(View on Vimeo)