IDF Shanghai | So you think parallel programming is a drag, huh? "Let us recompile your app to take advantage of all those cores", says Intel. Plus heaps of other juicy new stuff.
Intel admits programming in parallel is still very, very hard for most programmers.
An Intel spokesman presenting on the topic said that although today’s computers operate in the gigascale range – “Gigahertz frequency, Gigabit transmission speeds, Gigabyte storage capacity”, we will soon be in a “Terascale” era, where everything is 1,000 times that – including the core count.
So, unless programs are completely multithreaded, they simply won’t use the power available in hugely multicore systems.
“We can’t blame the programmers, though. The industry has been complaining for 30 years about how difficult parallel programming is.”
Intel says it has developed a new programming model that is closer to the way programmers naturally think, while still allowing for multithreading.
“Our answer is Ct: C stands for C++ based MPI and T stands for high throughput. So programmers can write C++-like scalar code, and our Ct code will do everything that an experienced programmer would do, like parallelisation and vectorisation.”
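Intel didn’t show Ct syntax in the presentation, but the core idea – the programmer writes plain scalar, per-element code and a runtime works out how to spread it across cores – can be sketched in Python. Everything below (names, the thread-pool “runtime”) is illustrative, not Ct itself:

```python
from concurrent.futures import ThreadPoolExecutor

def scale_and_offset(x):
    # Plain scalar code: the programmer only writes per-element logic.
    return 2 * x + 1

def parallel_map(fn, data, workers=4):
    # A stand-in for the Ct runtime: it decides how to spread the work
    # across cores, so the programmer never touches threads directly.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, data))

print(parallel_map(scale_and_offset, range(8)))  # [1, 3, 5, 7, 9, 11, 13, 15]
```

The point is that `scale_and_offset` contains no threading code at all – scaling to more cores is the runtime’s problem, which is why Intel claims the same source could carry over to terascale core counts unchanged.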
Intel has already got the technology working on quad and eight core platforms, and says its existing test applications can run on upcoming terascale platforms without modification of any code.
The closing words were amusing: “If you are a programmer, please do not worry about being fired because you cannot do terascale programming, because Ct will blast you into the parallel era.”
Speculative parallel threading
The next presentation was also on helping programmers with multithreading. With this technology, however, rather than programmers having to do any recoding, a new compiler takes single-threaded apps and makes them work in a multithreaded mode.
It works by analysing an application to see whether parts of it can be selected and run in parallel. If a candidate part executes successfully, the software knows it is safe, and the application can be recompiled with the settings in place for that thread to run in parallel.
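Intel didn’t detail the compiler’s mechanics, but the general shape of speculative execution – run chunks of a loop optimistically in parallel, check afterwards whether any chunk touched data another chunk wrote, and fall back to sequential execution on a conflict – can be sketched. All names here are hypothetical, not Intel’s:

```python
from concurrent.futures import ThreadPoolExecutor

def run_chunk(chunk):
    # Execute one chunk speculatively, logging its read and write sets
    # so the runtime can check afterwards whether speculation was safe.
    reads, writes, out = set(), set(), {}
    for i in chunk:
        reads.add(i)   # this iteration reads slot i...
        writes.add(i)  # ...and writes slot i only
        out[i] = i * i
    return reads, writes, out

def speculative_loop(n, workers=2):
    chunks = [range(k, n, workers) for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_chunk, chunks))
    # Commit only if no chunk wrote a slot that another chunk read:
    for i, (_, w1, _) in enumerate(results):
        for j, (r2, _, _) in enumerate(results):
            if i != j and w1 & r2:
                return [i * i for i in range(n)]  # conflict: redo sequentially
    merged = {}
    for _, _, out in results:
        merged.update(out)
    return [merged[i] for i in range(n)]

print(speculative_loop(8))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because each iteration here only touches its own slot, the speculation always succeeds; a loop with cross-iteration dependencies would trip the conflict check and be re-run sequentially, which is why the technique is safe to apply to existing binaries.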
Intel explained: “It’s different from Ct, because Ct is a new programming model for writing large-scale parallel programs, whereas speculative parallel threading recompiles existing applications.”
Carry small, display large – remote graphics rendering
This IDF looks like it will be very much focused on pocket-computing, and that’s a good thing as far as I’m concerned. Intel has been promising a full x86 computer in your pocket for several years now, and the devices have been too chunky to be useful. This is the year that Intel should have some good stuff to show.
The Atom processor, which will be officially launched at this IDF in the keynote this morning (around midday Australian eastern time), is headed squarely at small form factor PCs such as the Eee PC generation 2.0. However, at 2W for the 1.6GHz Atom CPU and another 2W for its chipset (a total of 4W), it’s still nowhere near power efficient enough for smartphone/pocketable PC type devices. (I don’t know what to call them any more… phones, smartphones, UMPCs, PDAs, pocketable PCs, internet tablets – where does one draw the line these days?)
Whatever pocketable PCs Intel does pull out of its hat tomorrow, one thing is for sure: it is very, very keen for the industry to get on board with its new remote graphics rendering technology. This is essentially an Intel chip that TV, projector and other display manufacturers build into their displays’ backend electronics, allowing the display to receive images wirelessly from a pocketable PC.
Intel said it had considered two different models for the wireless image transmission: one where drawing instructions would be sent and rendered at the remote display, and another where the images would be rendered on the mobile device and sent as a video stream.
It has ultimately settled on a system more like the former – drawing instructions are sent, then rendered by the Intel chip built into the display.
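The bandwidth gap between the two models is easy to see with some back-of-envelope numbers. The figures below are illustrative assumptions, not Intel’s:

```python
# Rough comparison of the two wireless transmission models.
width, height, bytes_per_pixel, fps = 1280, 720, 3, 30

# Model B: render on the handheld, stream raw frames to the display.
raw_stream_bps = width * height * bytes_per_pixel * fps * 8
print(f"raw video stream: {raw_stream_bps / 1e6:.0f} Mbit/s")   # ~664 Mbit/s

# Model A: send drawing instructions instead -- assume, say,
# 2,000 draw commands per frame at 32 bytes each.
commands_per_frame, bytes_per_command = 2000, 32
instruction_bps = commands_per_frame * bytes_per_command * fps * 8
print(f"draw instructions: {instruction_bps / 1e6:.1f} Mbit/s")  # ~15.4 Mbit/s
```

An uncompressed video stream would swamp a wireless link (real systems would compress it, at a cost in latency and mobile CPU load), whereas compact drawing instructions fit comfortably – which helps explain why Intel chose to render at the display end.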
When APC asked why Intel thought it would be successful with this technology, where so many others (including Microsoft, with its networked projector technology) have failed before, the answer was simple: “even though it looks like a software solution, Intel can put all the necessary software on a chip and make it available to manufacturers very cheaply.”
The chips will render graphics using OpenGL, and have the advantage of taking much of the processing load off the battery-powered mobile device. As a result, graphics can be rendered at better quality and higher resolutions than the puny mobile CPU could cope with on its own.
Next killer apps – “media mining” research
The problem of text search has “been solved”, according to Intel. Few people would disagree – Google has pretty much got that one under control, thanks very much.
However, Intel reckons the big area yet to be conquered is video, photo and audio search, and that its experience in massively parallel computing tasks is where it can really help.
Speaking of massively parallel computing in the graphics space – it ties in rather neatly with Intel’s Larrabee many-core GPU project, and Intel specifically mentioned Larrabee in this context. Clearly, image and video search is going to be one key area where Larrabee will differentiate Intel from competitors who are still heavily focused on gaming graphics.
Its research labs in China, in collaboration with Tsinghua University, are developing ‘media mining’ software which can sift through masses of photos, video and audio and make sense of them using machine intelligence.
For example, in photos the software recognises faces, poses, gestures and expressions. In scenic photos, it identifies vegetation, waterscapes, the view type and whether the shot is outdoors or indoors.
The video mining software can analyse footage for logos, vehicles, animals and buildings, among other objects. Home videos can be analysed for people walking, running, standing or sitting, and the audio can even be analysed to pick out speech, laughter, silence and music.
Intel claims 90 per cent recognition accuracy on photos and is working on making the system perform as well on personal videos, where accuracy is already up to 70 per cent.
Intel says it hopes to bring the technology to market within two to three years.
Intel says it has analysed previous efforts in facial recognition systems and that its own, at 90 per cent accuracy, is the best. “At a computer vision conference in 2007, a paper reported that accuracy in personal photo recognition was 75 per cent – but our system has much better accuracy.”
Make your own music video
This was an oddball one – Intel is working on a music video editor to help people make interesting music videos.
It comes with ‘professional music video templates’ based on real music videos by well-known artists. It then searches through your own video library, using Intel’s video mining technology, to find the clips that most closely match the scenes in the original music video.
Intel admits it’s not going to turn your family photos into the next Britney Spears mega-hit, but says it can make an impactful video by choosing scenes with the right mix of lighting, action, tonality, number of people and so on.
Intel said it hadn’t decided whether it would turn the research into a product yet, but demos that APC saw looked promising, if only as a demonstration of what parallel computing can do for video.
Building networking into CPUs
About three years ago, Intel started a project called L3NIC, looking at how to improve NIC performance dramatically by placing the NIC closer to the CPU.
Its reasoning was that forcing high-performance network traffic through the motherboard bus and main memory could be a significant bottleneck in overall performance.
“If traffic has to go through the motherboard and main memory before applications can parse the data in the CPU, it slows things down,” said a spokesman.
Intel decided to look at a new co-processor, which it believes can improve application performance by 10 to 100 times. The technology is called “QuickAssist”.
“We started this initially as a research project to verify an idea, but we discovered the technology was so cool, and there was such demand in our customer segments, that we are productising the technology.”
Intel is even considering moving the NIC into the CPU die to shorten the signal length.
Phew – that’s it! If the descriptions here of the research presented by Intel are rather brief, that’s because the presentations themselves were brief. Day 0 of IDF is the day before the main expo gets started, and Intel tries to give journos the widest overview of all the peripheral activities Intel is doing. The next two days, though, will be focused on all of Intel’s big announcements, which we will be covering as they come up!
Dan Warne travelled to IDF Shanghai as a guest of Intel.