Tagged: Video

Exoplanet β Pic b orbiting β Pictoris

A series of images taken between November 2013 and April 2015 with the Gemini Planet Imager (GPI) on the Gemini South telescope in Chile shows the exoplanet β Pic b orbiting the star β Pictoris, which lies over 60 light-years from Earth. In the images, the star is at the centre of the left-hand edge of the frame, hidden by the Gemini Planet Imager’s coronagraph. We are looking at the planet’s orbit almost edge-on; the planet is closer to Earth than the star is.

The images are based on observations described in a paper published in the Astrophysical Journal on 16 September 2015, with lead author Maxwell Millar-Blanchaer. GPI is a groundbreaking instrument developed by an international team led by Stanford University’s Prof. Bruce Macintosh (a U of T alumnus) and the University of California, Berkeley’s Prof. James Graham (former director of the Dunlap Institute for Astronomy & Astrophysics, U of T).

Image credit: M. Millar-Blanchaer, University of Toronto; F. Marchis, SETI Institute

(View on Vimeo)

The Guardian

Annecy International Film Festival 2015 – Official Selection / Arizona Film Festival 2015 – Official Selection / CIFF 2015 – Royal Reel Award / River Film Festival 2015 – Official Selection

Making-of: http://j.mp/1M6u3xg
Trailer: http://j.mp/1OpXrkJ

The Guardian

The Guardian is a free interpretation of the parable “Before the Law” from Kafka’s book “The Trial”. A peasant, after travelling the world, arrives in front of a gate controlled by a fearsome Guardian. The peasant tries to pass through, but the Guardian denies him entrance. Peasant and Guardian are the same character: the peasant, like each one of us, stands before his own fear; the Guardian is something shapeless that surrounds and controls him. The Door/Gate represents the possibilities we encounter during our lives.


N9ve – http://www.n9ve.it

Alessandro Novelli

Alessandro Novelli
Victor Perez
Andrea Gendusa

Illustration and Design:
Alessandro Novelli
Karolina Pospischil http://j.mp/1M6u3xj
Andrea Gendusa http://j.mp/1OpXpcC

Victor Perez – Character Animation, 3D Modeling
Alessandro Novelli – 3D Modeling, 3D Animation
Gabriele Maiocco – 3D Head Modeling, ZBrush Artist http://j.mp/1M6u6co
Andrea Gendusa – 3D Modeling, 3D Animation http://j.mp/1OpXpcC

Original Music:
Simon Smith http://j.mp/1OpXpcD
Sasha Agranov http://j.mp/1M6u6cp

Voice Over:
Luis De Velasco


Music Recording and Mixing:
Eric Nagel – at BCNSound – BCN – http://j.mp/1OpXpcD

Voice Over recording:
Juan José Rodriguez – at Abuela Records – MX

Yves Roussel

Many thanks to:
Leandra Boj http://j.mp/1OpXrB2
Rafael Mayani http://j.mp/1M6u6ct
Xavier Sanchez http://xave.es
and Karolina Pospischil http://j.mp/1M6u3xj
for their posters.

Special Thanks to:
Simon, Sasha and BCNsound (http://j.mp/1OpXpcD), Laura Sans Gassó, Ana Isabella Byrne Bellorín, Mariana Perfeito, Luis De Velasco

(View on Vimeo)

Farewell – ETAOIN SHRDLU – 1978

A film created by Carl Schlesinger and David Loeb Weiss documenting the last day of hot-metal typesetting at The New York Times. The film shows the entire newspaper production process, from hot-metal typesetting to creating stereo moulds to high-speed press operation. At the end, the new typesetting and photographic production process is shown in contrast to the old ways.

There are interviews with NYT workers both for and against the new technology. In fact, one typesetter retires on this final day because he does not want to learn the new process and technology.

This is the first time the film has been made available in HD, transferred from the original 16mm master.

See more printing, journalism, and typographic-related films at: http://j.mp/1hzXoGf

(View on Vimeo)

Journey through the layers of the mind

first tests playing with #deepdream #inceptionism

A visualization of what’s happening inside the mind of an artificial neural network.

By recognising forms in these images, your mind is already reflecting what’s going on in the software, projecting its own bias onto what it sees. You think you are seeing things – perhaps puppies, slugs, birds, reptiles etc. If you look carefully, that’s not what’s in there; those are just the closest things your mind can match to what it’s seeing. Your mind is struggling to put together images based on what it knows. And that’s exactly what’s happening in the software. The difference is that you’ve been training your mind for years, probably decades, while these neural networks are usually trained for a few hours, days or weeks.


In non-technical speak:

An artificial neural network can be thought of as analogous to a brain (immensely, immensely, immensely simplified; nothing like a brain really). It consists of layers of neurons and connections between neurons. Information is stored in this network as ‘weights’ (strengths) of the connections between neurons. Low layers (i.e. closer to the input, e.g. the ‘eyes’) store and recognise low-level abstract features (corners, edges, orientations etc.), and higher layers store and recognise higher-level features. This is analogous to how information is stored in the mammalian cerebral cortex (e.g. our brain).
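
To make ‘layers of neurons and weights’ slightly more concrete, here is a minimal toy sketch (assuming Python/PyTorch; it bears no resemblance to the real networks involved, it just shows layers stacked from low-level to high-level, with the knowledge held in weights):

```python
# Toy sketch only (assumed PyTorch): a stack of layers whose weights
# hold everything the network 'knows'. Purely illustrative.
import torch.nn as nn

tiny_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low layer: edges, corners, orientations
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher layer: textures, object parts
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # top layer: whole-object categories
)

# The network's 'information' lives entirely in these weight tensors:
for name, weights in tiny_net.named_parameters():
    print(name, tuple(weights.shape))
```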

Here, a neural network has been ‘trained’ on millions of images – i.e. the images have been fed into the network, and the network has ‘learnt’ about them (establishing weights / strengths for the connections). (NB: the specific database of images fed into the network is known as ImageNet: http://j.mp/1NLTioT)

Then, when the network is fed a new, unknown image (e.g. me), it tries to make sense of (i.e. recognise) this new image in the context of what it already knows, i.e. what it’s already been trained on.

This can be thought of as asking the network “Based on what you’ve seen / what you know, what do you think this is?”, and is analogous to you recognising objects in clouds or in ink blots / Rorschach tests etc.

The effect is further exaggerated by encouraging the algorithm to generate an image of what it ‘thinks’ it is seeing, and feeding that image back into the input. Then it’s asked to reevaluate, creating a positive feedback loop, reinforcing the biased misinterpretation.

This is like asking you to draw what you think you see in the clouds, then asking you to look at your drawing and draw what you think you see in your drawing, and so on.

That last sentence was actually not fully accurate. It would be accurate if, instead of asking you to draw what you think you saw in the clouds, we scanned your brain, looked at a particular group of neurons, reconstructed an image based on the firing patterns of those neurons – the in-between representational states in your brain – and gave *that* image to you to look at. Then you would try to make sense of (i.e. recognise) *that* image, and the whole process would be repeated.

We aren’t actually asking the system what it thinks the image is; we’re extracting the image from somewhere inside the network – from any one of the layers. Since different layers store different levels of abstraction and detail, picking different layers to generate the ‘internal picture’ highlights different features.
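
For the curious, that feedback loop can be sketched in a few lines of code. This is a rough illustration under stated assumptions, not the code behind the video: PyTorch/torchvision, the pretrained googlenet model and its inception4c layer are all choices made for the example.

```python
# Hedged sketch of the feedback loop (assumed PyTorch/torchvision;
# googlenet and its inception4c layer are stand-ins chosen for the example).
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()

# Grab the activations of one chosen layer with a forward hook.
grabbed = {}
model.inception4c.register_forward_hook(
    lambda module, inputs, output: grabbed.update(act=output)
)

# Treat the image itself as the thing being optimised.
img = torch.rand(1, 3, 224, 224, requires_grad=True)  # noise; a photo works too
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    model(img)
    # Gradient ascent on the layer's activations: whatever this layer
    # 'thinks' it sees gets drawn more strongly into the image, which is
    # then fed back in on the next iteration.
    loss = -grabbed["act"].norm()
    loss.backward()
    optimizer.step()
    img.data.clamp_(0, 1)  # keep pixels in a displayable range
```

Swapping inception4c for a lower or higher layer changes what gets amplified – exactly the ‘picking different layers highlights different features’ point above.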

All based on the Google research by Alexander Mordvintsev (Software Engineer), Christopher Olah (Software Engineering Intern) and Mike Tyka (Software Engineer).



(View on Vimeo)

Colossus by Pat Vale

Follow me on Instagram – http://j.mp/1IJwVkT
Follow me on Facebook – http://j.mp/1KZmR4C

Massive thanks to John Barber for the incredible musical score. Hear more of his work at http://j.mp/1IJwWFv and http://j.mp/1IJwWFz.

Thanks to Colorist Daniel Silverman at MPC (and Ariella Amrami)

Thanks to Dom del Torto at Big Animal

Colossus is a drawing that I made in New York during December 2014.

(View on Vimeo)

The Fallen of World War II

An animated, data-driven documentary about war and peace, The Fallen of World War II looks at the human cost of the Second World War and compares its numbers to other wars in history, including trends in recent conflicts.

Visit http://www.fallen.io for the interactive version and more information

Written, directed, coded, narrated by https://twitter.com/neilhalloran
Sound and music by https://twitter.com/Dolhaz

(View on Vimeo)

Lunar economic zone by Zhan Wang

See more architecture and design movies at dezeen.com/movies

Architectural Association graduate Zhan Wang has produced an animation depicting a fictional technotopian future scenario in which China has built a giant port to distribute minerals mined from the moon.

Zhan Wang’s Lunar Economic Zone project imagines a celebration taking place in Shenzhen in the year 2028 to mark the arrival of the first shipment of lunar minerals.

The animation portrays the architecture and infrastructure required by such a system, and the way the parade might be used as propaganda to present China’s technological and economic prowess through the lens of the global media.

See the full story on Dezeen: http://j.mp/1AcZmVt

(View on Vimeo)

ATROPA — Sci-fi Short

An Off-World Detective investigates the missing research vessel ATROPA. Concept short inspired by ’70s and ’80s sci-fi classics like Alien and Blade Runner.

Directed by Eli Sasich • Written by Clay Tolbert
(Agent) Trevor Astbury • CAA
(Manager) Eric Williams • Zero Gravity Management

Produced by Chris Bryant
Co-Produced by Lieren Stuivenvolt Allen
Director of Photography Greg Cotten
Production Designer Alec Contestabile
Edited by Zachary Anderson
Music by Kevin Riepl
Visual Effects by Ryan Wieber
VFX Supervisors Ryan Wieber & Tobias Richter
Spaceship VFX by Tobias Richter & The Light Works

Anthony Bonaventura
Jeannie Bolet
David M. Edelstien
Ben Kliewer
Chris Voss

Making-of interview: http://j.mp/1zSsIrv

© Corridor Productions 2015

(View on Vimeo)

Teacher of Algorithms

Speculative Vision commissioned by http://j.mp/1K9rSa9

We think of smart/learning objects as not-yet-finished entities that can evolve their behaviors by observing, reading and interpreting our habits. They train their algorithms with deep learning or similar techniques to constantly adapt and refine their decisions.
But what if things are not that good at learning after all, without some help?
What if, in the near future, I could just get a person to train my products, as I do with pets?
Why deal with the initial problems, and the risk that a product might even learn wrongly?
So, what if there were an algorithm trainer?

Thanks to:
Azyre Yang as The Teacher
Carmelo Ferreri as The coffee-machine guy

Shot by Andrea Carlon
Story and Edit by Simone Rebaudengo
Sound by Daniel Prost
Music: Lovely Nanjing

(View on Vimeo)