Intel demos world’s first ‘walk-around’ VR video experience



At CES 2017, Intel announced a partnership with Hype VR to deliver high-fidelity video capture that allows viewers to move around a video scene as if they were there.

The technology requires massive amounts of computing — every frame of video is 3GB.
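To put that figure in perspective, a quick back-of-envelope calculation shows the raw data rates it implies. This is only a sketch: the article gives the 3 GB/frame number, but the frame rates below are assumptions, since no capture rate is stated.

```python
# Data rates implied by the 3 GB/frame figure from the article.
# Frame rates are assumed for illustration; none is stated in the demo.
GB_PER_FRAME = 3

def data_rate_gb_per_s(fps: int) -> int:
    """Raw capture data rate in GB/s at a given frame rate."""
    return GB_PER_FRAME * fps

for fps in (24, 30, 60):
    per_second = data_rate_gb_per_s(fps)
    per_minute = per_second * 60
    print(f"{fps} fps -> {per_second} GB/s ({per_minute} GB/min)")
```

Even at a cinematic 24 fps, that works out to 72 GB of raw data per second, which explains why the demo loops after only a short clip.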

All things CES 2017 can be found here ► https://www.cnet.com/ces/
Watch our CES 2017 coverage ► http://cnet.co/2iaIdnA

Subscribe to CNET ► http://cnet.co/2heRhep
Check out our playlists ► http://cnet.co/2g8kcf4
Like us on Facebook: https://www.facebook.com/cnet
Follow us on Twitter: https://www.twitter.com/cnet
Follow us on Instagram: http://bit.ly/2icCYYm
Add us on Snapchat: http://cnet.co/2h4uoK3

36 thoughts on “Intel demos world’s first ‘walk-around’ VR video experience”

  • January 5, 2017 at 2:05 pm
    Permalink

    Wow, super-awesome! The problem is that at 3GB for each frame… it is almost unusable. Not to mention the time and the cost needed to record this kind of video…

  • January 5, 2017 at 2:20 pm
    Permalink

    He does not look like he wants to be there; he looks like he hates the product.

  • January 5, 2017 at 4:20 pm
    Permalink

    Videogrammetry. Wow. I wonder how far (in room-scale or even with teleportation) you could walk into it. If the capture was done with only one camera (or one fixed perspective with multiple lenses anyway), the resolution likely becomes sketchy when you start getting further from the stationary camera's location. Photogrammetry (as in Google Earth VR) uses a series of photos taken from a lot of positions and stitched together to produce 3D geometry and textures. This uses 360 degree binocular video from one position to produce 3D geometry and textures. Amazing stuff though!

  • January 5, 2017 at 4:23 pm
    Permalink

    BS. There was no walking around in the demo. There was peeking around a few objects, which can already be done with an extra-long 360 camera and doesn't require 3 gigs per frame. I want to see the mapping.

  • January 5, 2017 at 5:38 pm
    Permalink

    Soooo… what's the difference between this and what I've been doing with my Oculus Rift for the past year?

  • January 5, 2017 at 6:06 pm
    Permalink

    It is awesome, but I am curious about the technical details. How did they achieve this?

  • January 5, 2017 at 8:43 pm
    Permalink

    Parsing and parallaxing using the card method from multiple sources. Nothing new, but certainly novel on such a platform. Really cool, actually.

  • January 6, 2017 at 1:34 am
    Permalink

    How is this 3D video? There's some animation, but the cow close to the camera is just a flat billboard. This is barely different from your regular 3D photo scan.

  • January 6, 2017 at 2:50 pm
    Permalink

    All we need now is to connect a matter-manipulation containment field, and then hello Holodeck, or Matrix, whatever you like to call it 🙂

  • January 10, 2017 at 9:01 am
    Permalink

    3GB a frame? Yeah, wake me up in 10 years.

  • January 11, 2017 at 8:41 pm
    Permalink

    This is very good for virtual travel. There is going to be demand for VR vacations, and people will buy VR trips for 10 cents or 1 USD.

  • January 29, 2017 at 1:51 pm
    Permalink

    Everybody is so concerned about the 3GB/frame thing or the VR tech, but nobody wrote a comment about how they actually captured this in real life… like HOW?!

  • February 17, 2017 at 11:34 pm
    Permalink

    3 gigs per frame!!!

    Ok, so basically it's a glorified tech demo of something we might get 10 years from now, because it sure as hell won't be available like this any time soon.

  • February 17, 2017 at 11:36 pm
    Permalink

    So if you wanted the basic cinematic experience of 24fps, you're looking at 72GB per second of video…

  • March 17, 2017 at 1:44 pm
    Permalink

    This is the kind of thing I hope they can one day add to Google Maps.

  • December 5, 2017 at 10:51 pm
    Permalink

    so it's animated photogrammetry?

  • December 22, 2017 at 2:43 am
    Permalink

    This is how vr porn should be made, not the crap that's out right now.

  • January 9, 2018 at 10:31 pm
    Permalink

    If this is point-cloud-based 3D data, then could the lighting, shading, and GI data be extracted, then re-implemented and applied over the superimposed 3D data in a real-time game engine with full interactivity?

  • January 10, 2018 at 11:52 am
    Permalink

    Intel has by far the most impressive conference of them all.

    Hats off.

  • May 11, 2018 at 12:41 am
    Permalink

    It remains to be seen if Intel can achieve success in Hollywood. This represents an enormous investment that will require even more time, money, and effort to achieve meaningful results.

  • September 12, 2018 at 1:32 am
    Permalink

    Wow! 3GB per frame. No wonder it loops after a pretty short time.

Comments are closed.