TeliaSonera’s commercial 4G network has launched in Scandinavia and uses a Samsung dongle.
4G wireless has rolled out in Scandinavia and is being tested here by Telstra and Optus. David Braue finds out what 4G really is and what we can expect from it.
The recent Australian election debate on fibre versus wireless highlighted some confusion over exactly what fourth-generation (4G) wireless is. With the recent launch of the world’s first 4G wireless broadband service by Scandinavian carrier TeliaSonera, the race is now on towards a new generation of wireless services that could reach 1Gbit/s - and vendors are now working to clearly define what 4G is.
The definition was initially muddied by the vendors’ own self-serving bias. Many saw 4G as seamless roaming between 3G mobile and fixed networks based on WiMAX, a high-speed wireless technology best suited to bringing internet services straight into the home, but carriers - seeking to protect their 3G investments - favoured HSPA (High-Speed Packet Access) services instead. Those services struggled at first to reach 1Mbit/s throughput, and even Telstra’s much-vaunted Next G network only delivers up to 8Mbit/s in practice. Compare that with vividwireless, the Seven Network-owned WiMAX ISP that recently reported average user speeds of 9.53Mbit/s.
Vendors may have pushed WiMAX and HSPA as 4G - or intermediary designations like 3.25G, 3.5G and 3.75G - but the labels meant little. “The industry is just playing on perceptions,” says Kursten Leins, strategic marketing manager for multimedia with Ericsson. “If they sell it as 4G, it must be better than 3G, right? But it’s important to look at what any technology is actually doing at that point in time. You’ve got a wide spread of HSPA user experiences and peak or actual network throughputs.”
Early confusion is dissipating, however, after the Next Generation Mobile Networks (NGMN) program, whose membership includes the world’s largest telcos, set out minimum 4G criteria including quality-of-service (QoS) capabilities, efficient use of radio frequency spectrum, low latency, compatibility with HSPA roadmaps, and clarity of intellectual property rights.
Current 4G targets include a minimum transfer speed of 100Mbit/s, which is finally achievable with the 3GPP LTE (Long Term Evolution) technology deployed by TeliaSonera. As of mid-2010, the Global Mobile Suppliers Association noted 80 firm LTE deployments in 33 countries, with 30 more networks in the planning stages; Telstra and Optus are trialling the technology here - Telstra reported pushing data at 100Mbit/s over a 75km link - but neither has committed to a rollout.
LTE is an upgrade to today’s HSPA-based 3G networks, and has delivered up to 160Mbit/s in testing. In the longer term, LTE will segue into the LTE Advanced specification, which promises blistering speeds of up to 1Gbit/s - fast enough to wirelessly link company offices without a noticeable drop in speed. And thanks to co-operation between 3GPP (the peak industry body for GSM, WCDMA and HSPA development) and 3GPP2 (which develops CDMA), networks from both camps can move straight to LTE.
BETTER LTE THAN NEVER
Given market confusion over what ‘4G’ means, it’s a relief that the industry finally has a hard target. But what exactly is LTE, and how does it work?
Like 802.11a/b/g Wi-Fi, digital TV’s DVB, digital radio’s DAB and other standards, LTE uses a technique called orthogonal frequency division multiplexing (OFDM) to carry data on the downlink; Single-Carrier FDMA (SC-FDMA) is used for the uplink.
To understand how OFDM works, picture a biscuit factory: biscuits are made, one after another, from dough squirted onto conveyor belts at equal intervals. If you want to make more biscuits at a time, you add more conveyor belts. Your total capacity is limited by the width of your factory floor.
Think of the biscuits as bits of data, which in OFDM are carried between the LTE base station and your smartphone on ‘tones’ or subcarriers - individual radio frequency bands each just 15kHz wide. Time on each subcarrier is sliced into 0.5-millisecond slots, each carrying seven OFDM ‘symbols’. In other words, each conveyor belt carries seven biscuits past a given point every 0.5ms.
Now imagine that you decide to increase biscuit-making capacity by stacking 2, 4, or 6 biscuits on top of each other. LTE uses ‘modulation schemes’ that allow each symbol to represent 2 (QPSK modulation), 4 (16QAM modulation) or 6 bits (64QAM modulation). This means LTE can transmit 28 to 84 bits per millisecond, per subcarrier.
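As a quick sanity check of those figures, a few lines of Python (the constants come from the explanation above; the variable names are mine) reproduce the per-subcarrier rates:

```python
# Bits carried by a single LTE subcarrier per millisecond, using the
# figures in the text: 7 OFDM symbols per 0.5ms slot, and 2/4/6 bits
# per symbol depending on the modulation scheme.
SYMBOLS_PER_SLOT = 7
SLOTS_PER_MS = 2  # each slot lasts 0.5ms, so two slots fit in 1ms

BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def bits_per_ms_per_subcarrier(modulation):
    """Raw bits one subcarrier carries in one millisecond."""
    return BITS_PER_SYMBOL[modulation] * SYMBOLS_PER_SLOT * SLOTS_PER_MS

for scheme in BITS_PER_SYMBOL:
    print(scheme, bits_per_ms_per_subcarrier(scheme))
# QPSK 28, 16QAM 56, 64QAM 84 - the 28-84 bits/ms range quoted above
```

Multiplying 14 symbols per millisecond by 2, 4 or 6 bits per symbol gives exactly the 28-84 bits/ms spread the article quotes.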
When you connect to an LTE base station, your device uses ‘resource blocks’ that tie together 12 subcarriers - consuming 180kHz of radio spectrum in total - to carry up to 1,008 bits per millisecond, or just over 1Mbit/s. Combine 96 subcarriers to carry eight resource blocks at once, and you’ve got an 8Mbit/s wireless connection - but you’ll be tying up 1.44MHz of radio frequency spectrum (data is multiplexed so a subcarrier can carry traffic to and from multiple devices at once).
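The resource-block arithmetic checks out the same way - a short sketch (again, the names are mine, and the 84 bits/ms figure assumes 64QAM modulation):

```python
# Resource-block arithmetic from the text: a block ties together 12
# subcarriers of 15kHz each, every one carrying 84 bits/ms at 64QAM.
SUBCARRIERS_PER_BLOCK = 12
SUBCARRIER_KHZ = 15
BITS_PER_MS_64QAM = 84

block_bits_per_ms = SUBCARRIERS_PER_BLOCK * BITS_PER_MS_64QAM
print(block_bits_per_ms)          # 1008 bits/ms, just over 1Mbit/s

# Eight blocks (96 subcarriers) at once:
blocks = 8
print(blocks * block_bits_per_ms / 1000)                 # ~8.06 Mbit/s
print(blocks * SUBCARRIERS_PER_BLOCK * SUBCARRIER_KHZ)   # 1440kHz = 1.44MHz
```

So the ‘8Mbit/s connection’ in the text is eight blocks’ worth of 64QAM subcarriers occupying 1.44MHz of spectrum.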
LTE can adapt to different amounts of spectrum including 1.4MHz, 3MHz, 5MHz, 10MHz, 15MHz and 20MHz. Note that at 20MHz - which, once guard bands are set aside, equates to 100 resource blocks - you’re getting over 100Mbit/s of bandwidth. But spectrum is a shared resource, so the more users online at once, the less bandwidth available to each.
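Scaling the per-block figure across those channel widths gives the headline peak rates. The resource-block counts below are the ones standardised by 3GPP (guard bands mean each channel yields fewer blocks than dividing its width by 180kHz would suggest), and the rates are raw single-antenna figures that ignore control overhead:

```python
# Peak downlink rate per LTE channel width, using the ~1.008 Mbit/s
# per-block figure derived earlier. Resource-block counts per channel
# come from the 3GPP LTE specification; guard bands eat the remainder.
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}
BLOCK_MBITS_PER_S = 1.008  # 12 subcarriers x 84 bits/ms at 64QAM

for mhz, blocks in RESOURCE_BLOCKS.items():
    rate = blocks * BLOCK_MBITS_PER_S
    print(f"{mhz}MHz -> {blocks} blocks -> {rate:.1f} Mbit/s peak")
# A full 20MHz channel works out at ~100.8 Mbit/s - the 'over
# 100Mbit/s' figure quoted in the text.
```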
LTE is quite flexible. For example, it offers eight quality-of-service (QoS) levels for guaranteed service quality. It also supports both frequency division duplexing (FDD) - ‘paired frequencies’, in which data is downloaded on one subchannel and uploaded on another - and time division duplexing (TDD), an ‘unpaired’ technique in which downloads and uploads take turns on the same channel, making it slower and less power-efficient. FDD is the reason wireless broadband services are so asymmetric, but it’s more spectrum-efficient than TDD. Carriers balance the two to deliver the most bandwidth to the largest possible number of simultaneous users.
A NEW-GENERATION ARCHITECTURE
As well as changing how data is carried over the air, LTE adopts an all-IP core network built around a simplified design called SAE (System Architecture Evolution).
Existing networks tend to use a hub-and-spoke design, with base stations connecting to central control nodes that rely on SS7 or SIGTRAN signalling to hand calls and data sessions between them.
LTE replaces older signalling with the 3GPP-favoured Diameter protocol (bit.ly/bsJELJ), which authenticates and authorises sessions as they’re handed between base stations. Since the SAE structure is flat, each base station can talk directly to its neighbours, creating a self-configuring environment that obviates the need for a separate control node. And because any base station is only one hop away from the internet, LTE latency can be less than 10ms, compared with hundreds of milliseconds in 3G - helping applications run faster.
BRINGING 4G TO LIFE
LTE is gaining momentum, but some call it 3.9G in anticipation of LTE Advanced, a successor specification that will push bandwidth up to 1Gbit/s - making it the first standard that can be rightfully designated as 4G.
LTE Advanced will require large amounts of contiguous radio frequency spectrum in low-frequency bands that provide long-distance, high-bandwidth transmission. The best target is the 700MHz range, which won’t be opened up until after the last analogue TV goes black at the end of 2013.
Government policymakers are considering the best policy for managing that spectrum, but the process is complicated by the need to ‘stack’ existing allocated frequencies together. In late June, the government announced it will provide 126MHz of usable spectrum in the 700MHz band - between 694MHz and 820MHz - with spectrum auctions to run from late 2012. A decision on 2.5GHz spectrum, which can also be used for LTE, is expected soon; together, these decisions will pave the way towards a clearer, faster 4G future.