A new company, HDlogix, is showing a real-time 2D-to-3D conversion process in hopes of inspiring programmers and service providers to engage in trials as a key step toward determining the market appeal of such capabilities. And sources say Motorola has developed a similar technique that uses advanced algorithms to interpret parameters of 2D digital video frames in ways that place pictorial elements in spatial relationships that mimic the visual effects of frames that are produced by 3D cameras.
Motorola officials decline to talk about their prototype system, but HDlogix is now going public with its endeavors after a long period of development pegged to feedback from content providers and distributors. The fruits of the company’s labors were on display at a recent demo staged for journalists and potential customers in Denver.
The demo featured 3D rendering of live HBO and ESPN programming feeds, offering convincing proof that these processes can be performed on virtually any content anywhere in the distribution chain for viewing on 3D-capable HDTV sets. The demo employed the standard digital set-tops used with Comcast’s cable service in Denver.
To the untrained eye, the synthetic 3D achieved through HDlogix’s real-time conversion process rivals many of the demos of content originated in 3D format on view at various trade shows. 3D video squeezed into a 6 MHz channel via various stereoscopic techniques has noticeable drawbacks stemming from the reduced resolution of the pictures formatted separately for each eye.
For example, anaglyph techniques, which deliver differently colored versions of the same picture to each eye, lend a flatness or cardboard-cutout effect to pictorial elements. Processes that deliver frames to each eye on an alternating basis, paired with shutter glasses that switch viewing from one eye to the other in sync with the frame sequence, eliminate the flatness effect but produce a discontinuity in the video flow that can be disconcerting in fast-action content such as sports programming.
Surprisingly, neither of these effects was present in the synthetic 3D demo conducted by HDlogix, which uses an alternating frame approach in conjunction with shutter glasses. While the 3D dimensionality was somewhat less dramatic than that of the 3D-originated content shown in other demos, the HDlogix 3D effect was sufficiently apparent to deliver a visual experience that represented a significant departure from 2D viewing.
In fact, says Simon Tidman, vice president of sales at HDlogix, the company’s algorithms could easily be tweaked to create a more intense 3D experience, but the tradeoffs involved would push the experience toward the less-than-satisfactory results attending the other 3D modalities. “The system isn’t tunable on a frame-by-frame or object-by-object basis, but we can certainly turn the knobs to fine-tune the way various processes are applied depending on the type of content and the preferences of content suppliers,” Tidman says.
“The way we’ve calibrated these applications for the demo represents our take on what the optimal experience would be,” he adds. “It may not have images leaping off the screen, but it avoids the flatness and motion impairments you might see with other approaches.”
As explained by HDlogix CTO William Gaddy, the company’s ImageIQ3D technology reconstructs the geometry and models the objects within each video frame via three approaches to pictorial analysis. The first – nuclear motion estimation – takes motion estimation to a new level of granularity, where object movement from frame to frame is accurate to one one-hundredth of a pixel, compared with the one-half- or one-quarter-pixel accuracy achieved in standard encoding processes.
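HDlogix has not published its algorithms, but the general idea of estimating motion to a fraction of a pixel can be sketched with a standard technique: phase correlation between two frames, with a parabolic fit refining the integer correlation peak. This is an illustrative sketch, not HDlogix code, and the function names are invented for the example:

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift an image by a (possibly fractional) amount via the Fourier
    shift theorem; used here only to create a test frame pair."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    F = np.fft.fft2(img) * np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.fft.ifft2(F).real

def subpixel_motion(a, b):
    """Estimate the (dy, dx) translation taking frame a to frame b by
    phase correlation, refined to sub-pixel precision."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F = F / (np.abs(F) + 1e-12)          # normalized cross-power spectrum
    corr = np.fft.ifft2(F).real          # sharp peak at the true shift
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c, p, n):
        # Fit a parabola through the peak and its two neighbors; its
        # vertex gives the fractional offset of the true maximum.
        l, c0, r = c[(p - 1) % n], c[p], c[(p + 1) % n]
        denom = l - 2 * c0 + r
        off = 0.5 * (l - r) / denom if denom != 0 else 0.0
        d = p + off
        return d - n if d > n / 2 else d  # wrap to a signed shift

    dy = refine(corr[:, px], py, corr.shape[0])
    dx = refine(corr[py, :], px, corr.shape[1])
    return dy, dx
```

A simple parabolic fit like this recovers shifts to within roughly a tenth of a pixel; reaching the hundredth-of-a-pixel accuracy the article describes would require more elaborate interpolation and per-block or per-object estimation rather than a single global shift.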
“A second piece is vanishing-point analysis,” Gaddy says. “If I know where all lines converge I can get detailed information on the geometry of the scene and the placement of objects in that space.”
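Gaddy's vanishing-point idea can be illustrated with a textbook least-squares computation: given several image lines that should converge, the point minimizing the summed squared perpendicular distance to all of them is the solution of a 2×2 linear system. This is a simplified sketch, not HDlogix's implementation:

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of 2D lines, each given as a
    (point, direction) pair. Minimizes the summed squared perpendicular
    distance from the returned point to every line."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in lines:
        d = np.asarray(d, float) / np.linalg.norm(d)
        # Projector onto the line's normal direction: I - d d^T.
        # Distance^2 from x to the line is (x - p)^T P (x - p).
        P = np.eye(2) - np.outer(d, d)
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)
```

In practice the lines themselves would first have to be detected (e.g. with an edge detector and a Hough transform), and a real scene yields several vanishing points, one per family of parallel edges.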
The third technique exploits the depth-of-focus information attending each pictorial element. “Objects nearby are more focused, while those further away are more blurred, which happens with every camera shot,” he explains. “We estimate the amount of blur in each pixel in each frame to gain more information about object depth, which we then use to create a synthetic left-eye view to match the right-eye view.”
HDlogix has also designed its software platform to serve as a means of overcoming disparities among the different approaches to 3D rendering so that service providers can deliver a unified 3D experience to the subscriber. In other words, Tidman says, all of the different flavors of existing 3D content can be converted for display in any given format in real time.
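The format-conversion point is concrete enough to sketch. Converting between the common frame-packing layouts is largely a pixel-rearrangement job; the example below repacks a side-by-side frame into top-and-bottom packing with nearest-neighbor resampling. It is a generic illustration of the idea, not an HDlogix API:

```python
import numpy as np

def sbs_to_tab(frame):
    """Repack a side-by-side 3D frame (left | right half-width views)
    into top-and-bottom packing (left over right, half-height views)."""
    H, W = frame.shape[:2]
    left, right = frame[:, :W // 2], frame[:, W // 2:]

    def repack(eye):
        # Stretch the half-width view back to full width (nearest
        # neighbor) and squeeze it to half height by dropping rows.
        return eye[::2].repeat(2, axis=1)

    return np.vstack([repack(left), repack(right)])
```

The output frame has the same dimensions as the input, so it can be passed straight through an existing HD distribution chain; a real converter would use proper filtered resampling rather than row dropping and pixel doubling.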
With anywhere from three to five million 3D-capable HDTV sets supplied by Samsung, Mitsubishi and Hyundai now in use in the U.S., and new sets slated for introduction by Sony, LG Electronics, Panasonic and others next year, the opportunity to develop 3D services is fast approaching mass scale. “In our discussions with programmers about what it will take to get 3D off the ground, the number-one requirement was a means of converting 2D to 3D content in order to increase the availability of 3D programming,” Tidman says.
“We’ve validated our system with experts to ensure we are meeting the performance parameters programmers are looking for,” he adds. “Now we are in discussions aimed at getting pilot activity off the ground. We need at least one programmer, one MSO and one consumer electronics manufacturer to work with as a first step.” He says a test involving at least 50 to 100 households would help determine whether the 3D experience created from 2D content appeals to consumers.