Features

Mobile advertising is fast becoming one of the most profitable sub-sectors in the telecoms industry. Fuelling this growth is the popularity of advanced mobile services and the willingness of consumers to receive advertising that appeals to their tastes. David Knox examines the potential of this new marketing initiative and how charging and monitoring solutions can help mobile operators provide flexible and profitable ad campaigns while enhancing the consumer’s user experience

With the mobile phone now the most ubiquitous technology in the world, it was only a matter of time before advertisers caught on to the idea that a highly effective new way to reach a mass audience was not by radio, television or newspapers, but through a wireless handset. Mobile advertising has definitely become the latest 'it' word in the telecoms industry, with analysts predicting billions of pounds in revenue over the next five years. According to one recent report published by Informa, the mobile advertising market is expected to be worth $871m this year, jumping to $11.35bn in 2011.

Big in Japan
One of the countries to embrace the concept of wireless advertising is Japan, which has the second largest advertising market in the world behind the US and was the first country to exceed 50 per cent 3G penetration. According to some estimates, mobile advertising revenues in Japan for 2006 are expected to exceed $300 million, and to double that figure by 2009 – higher than in any other country in the world. Almost 60 per cent of Japanese consumers already use mobile coupons and discounts more than once a month.
Following in the footsteps of Japan is the wireless industry in Western Europe, where the idea of mobile advertising is also being explored by some of the most pioneering mobile operators around, such as Hutchison's 3G mobile group, 3. The operator is currently subsidising usage and handsets through advertising on the phone, with these models also being offered through downloads, subscriptions and video streams. It has also supported various mobile advertising campaigns in exchange for free content, such as the launch of the first ever movie video ad in Europe on behalf of Redbus, the film distributor, for the movie 'It's All Gone Pete Tong'. A banner on the carrier's wireless portal home page linked to a microsite with more information on the film, where subscribers have already requested more than 100,000 downloads of the movie trailer.
What is astounding about the access rate of the aforementioned movie clip is that it shows how the average response rate for mobile advertising can be up to 10 times greater than Internet response rates. This is because there are only a handful of links to choose from on a typical mobile web page; if one of them is an ad, it is far more likely to get clicked than the same ad on the Internet.
Tackling the mobile ad market on an even bigger scale is wireless heavyweight Vodafone, who recently announced a strategic alliance with search engine Yahoo! to create an innovative mobile advertising business as a means to inject new revenue streams for both companies.
Through this partnership, Yahoo! will become Vodafone's exclusive display advertising partner in the UK. Yahoo! will use the latest technology to provide a variety of mobile advertising formats across Vodafone's content services. The initiative will be rolled out in the UK in the first half of 2007.
Under the plans, customers who agree to accept carefully targeted display advertisements will qualify for savings on certain Vodafone services. In other words, Vodafone subscribers would pay a lower rate for data services if they were prepared to accept ad banners from Yahoo! when signing up with the mobile provider.
This promotional deal would also extend to key Vodafone mobile assets, including the Vodafone live! portal, games, television and picture messaging services.
Vodafone and Yahoo!'s approach to tackling the mobile ad sector is a strategic one. It's all about understanding what each user wants from their mobile phone and providing a truly unique individual advertising experience. In a way, the mobile handset will be transformed into a wireless 'magazine' filled with all sorts of adverts, only it will be slicker – since the advertisements will be live, potentially interactive, and most importantly, targeting the tastes of the mobile user. It can also be based on user location at the time.
Matching user interests
This brings us to the subject of profiling, which is indisputably the most important aspect of mobile marketing and one of the key ingredients to success. In order for Vodafone, 3, or any other mobile operator to make any money, they must know the interests of their users so that the advertising campaigns directed their way actually appeal to them.
This is where charging and rating applications can help operators conduct effective ad campaigns. These solutions are all about control – allowing operators to implement sophisticated, value-based charging strategies that help to differentiate between innovative packages, bundles and promotions. VoluBill's 'Charge it' real-time data control and charging solution, for example, could be installed in a wireless network to detect subscribers and then display or redirect them to advertisements that match their user profile, provided they have chosen to receive advertising.
Information about the brand tastes and interests of the wireless user would be gathered by the mobile carrier before the user agrees to subscribe to advertising. This information would then be stored in the network and made accessible to the Charge it solution to ensure that advertising targets the right audience. So, for example, if one mobile user is identified as a die-hard fashionista who loves to read Vogue magazine each month, then Charge it could access the profile information in real time and automatically redirect the consumer to an advert for the film 'The Devil Wears Prada' when they next establish a data connection via the handset. If the consumer has previously received the ad for the film and responded to the link, then Charge it can also identify this and block the advert from being sent again. This helps ensure that the target audience doesn't get put off by seeing advertisements they have already responded to.
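To make the mechanics concrete, the following sketch (written in Python, with invented subscriber and campaign data rather than VoluBill's actual interfaces) illustrates the kind of decision logic described above: match an opted-in subscriber's profile against a campaign and suppress any advert the user has already responded to.

# Illustrative sketch only; data structures and names are hypothetical, not VoluBill's API.

already_responded = {("alice", "devil_wears_prada")}  # (subscriber, campaign) pairs

subscribers = {
    "alice": {"opted_in": True, "interests": {"fashion", "film"}},
    "bob":   {"opted_in": False, "interests": {"rugby"}},
}

campaigns = [
    {"id": "devil_wears_prada", "target_interests": {"fashion"}, "url": "http://ads.example/dwp"},
    {"id": "rugby_world_cup",   "target_interests": {"rugby"},   "url": "http://ads.example/rwc"},
]

def select_advert(subscriber_id):
    """Return the URL to redirect the data session to, or None to leave it untouched."""
    profile = subscribers.get(subscriber_id)
    if not profile or not profile["opted_in"]:
        return None                                   # user has not chosen to receive advertising
    for campaign in campaigns:
        if not profile["interests"] & campaign["target_interests"]:
            continue                                  # advert does not match the user profile
        if (subscriber_id, campaign["id"]) in already_responded:
            continue                                  # never resend an advert already acted upon
        return campaign["url"]
    return None

print(select_advert("alice"))   # None - Alice has already responded to the matching campaign
print(select_advert("bob"))     # None - Bob has not opted in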
This method of monitoring can also be used to track the success of a particular campaign and the revenue share that is generated as a result. If every fashionista who signed up for mobile advertising clicked on the film link for 'The Devil Wears Prada', for example, and also agreed to receive a free copy of the book that was simultaneously promoted in the movie advert, then Charge it would be able to provide the operator with details of how many and which subscribers saw the ad, and how many of them bought the book. Most importantly, Charge it could calculate the income generated from the advert – which would be divided among numerous parties, including the operator and its service providers, the advertising agency, the film distributor, the book publisher and, last but not least, the writer.

Extensive advantages
Indeed, what solutions like Charge it provide are extensive advantages to operators both in charging and managing the user experience, whether subscribers are surfing the Internet or downloading content-rich premium services. As mentioned before, charging and monitoring solutions are all about controlling the mobile advertising medium and ensuring its success.
Most, if not all, of the mobile content and adverts provided on the handset would be managed by search engine platforms such as Google's or, in the case of Vodafone, Yahoo!'s. As such, charging and monitoring solution providers can supply the additional user profiling, traffic redirection and charging functionality required to offer a complete mobile advertising and charging solution.
Apart from accessing profile information, Charge it could also be configured to respond to other factors that would enable operators to redirect a subscriber's wireless Internet session to a specific advertising page. One way would be through location identification. If a user went online to a particular website that didn't match their profile – a fashionista accessing rugby match details, for example – then Charge it could be configured to track this activity and record it for future marketing purposes. The next time an advertising campaign concerning the sport was launched, that user would be sent the advertisement during their next Internet session. Again, this data would be tracked so that Charge it could redirect or block the adverts when necessary.
If users accepted the ads sent to them, then Charge it could also be used to help operators offer flexible price plans. This could include charging customers who accept ads a cheaper rate for services, or granting them a free Internet usage allowance each day. Consumers could also potentially receive periodic adverts, like commercials on TV: the more frequent the adverts, the less they would pay for services.
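As a rough illustration of such a plan (a sketch only, with made-up tariff figures rather than any real operator's pricing), the discount could simply scale with the number of adverts a subscriber accepts each day:

# Hypothetical tariff: the more adverts a subscriber accepts per day, the cheaper the data rate.
BASE_RATE_PER_MB = 1.00          # illustrative price per megabyte
FREE_MB_PER_AD = 0.5             # free daily allowance earned per accepted advert
MAX_DISCOUNT = 0.60              # never discount more than 60 per cent

def daily_data_charge(mb_used, ads_accepted_per_day):
    discount = min(0.05 * ads_accepted_per_day, MAX_DISCOUNT)
    free_mb = FREE_MB_PER_AD * ads_accepted_per_day
    billable_mb = max(mb_used - free_mb, 0)
    return billable_mb * BASE_RATE_PER_MB * (1 - discount)

print(daily_data_charge(10, 0))   # no adverts accepted: full price, 10.0
print(daily_data_charge(10, 8))   # 8 adverts: 4 MB free and 40 per cent off, 3.6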

Acceptance
Although widely ignored for many years as a possible income spinner, mobile advertising has finally been accepted by the business world as an innovation that can combine the wide reach of television with the precision of direct marketing and the tracking potential of the Internet. All of this adds up to serious revenue potential. Mobile advertising also benefits all the participants involved in this new marketing game. While carriers get to boost their data and content revenue streams, advertisers gain a new and effective way of targeting a mass consumer audience. Meanwhile, mobile users get to benefit from great promotions and offers for the products and services they want to hear about, as well as cheaper mobile Internet usage.
Before mobile advertising and marketing can reach its full potential, however, certain technical and business requirements need to be met. These include putting in place the right charging and monitoring infrastructure, as well as building relationships with quality advertising partners. Once that is done, and with the right commercial models in place, there is nothing to stop mobile advertising becoming perhaps the most successful sub-sector in global telecoms history.

David Knox is Product Marketing Director at VoluBill

Technology companies come and go, but some are blessed with the foresight to help drive the technological developments that permeate all our lives. One such company is Micron, whose COO, Mark Durcan, tells Lynd Morley why it has been so successful

Lead interview – It's a vision thing

Future gazers abound in our industry, and we’re being promised a near-future of sensor networks and RFID tags that will control or facilitate everything from ordering the groceries, to personalised news projected into our homes or from our mobile phones. This stuff of science fiction, fast becoming science fact, is the visible, sexy end-result of the technology, but what about the guys working at the coal-face, actually producing the tools that enable the dreams to come true?
Micron Technology is one of the prime forces at that leading edge. Among the world’s leading providers of advanced semiconductor solutions, Micron manufactures and markets DRAMs, NAND Flash memory, and CMOS image sensors, among other semiconductor components and memory modules for use in computing, consumer, networking and mobile products. And Mark Durcan, Micron’s Chief Operating Officer, is confident that the company has been instrumental in helping the gradual realisation of the future gazers’ predictions.
“I do think that we are, in many ways, creating the trends, because we’ve created the technology which enables them,” he comments. “I can give you two prime examples. The first is in the imaging space where, for many decades, charge-coupled devices (CCDs) were the technology of choice for capturing electronic images – mostly because the image quality associated with CCDs was much better than that of CMOS imagers, which is what Micron builds today.
“Nonetheless, we were strong believers that we could marry very advanced process technology, device design and circuit design techniques with the CMOS imager technology, and really create a platform that enabled a whole new range of applications. 
“I think we did that successfully,” he continues, “and the types of applications that were then enabled are really quite stunning. For instance, with CCDs you have to read all the bits out serially, so you can’t capture images very quickly. With CMOS imagers you can catch thousands of images per second, which then opens the door to a whole new swathe of applications for the imagers – from very high speed cameras, to electronic shutters that allow you to capture a lot of images, and, by the way, you can do it using far less power. We have already made a major impact in providing image sensors to the notoriously power hungry cameraphone and mobile device based marketplaces, and in the space of two years have become the leading supplier of imaging solutions there. One in three cameraphones now have our sensors and in only two years we have become the largest manufacturer of image sensors in unit terms worldwide. So now, for instance, the technology enables all sorts of security, medical, notebook and automotive applications – you can tune the imagers for a very high dynamic range, low light and low noise at high temperatures which then enables them to operate in a wide variety of environments that CCDs can’t function in.
As a result, you can put imaging into a multitude of applications that were never possible before, and I think we really created that movement by creating the high quality sensors that drive those applications.”
The second example Durcan quotes is in the NAND memory arena. “What we’ve done is probably not apparent to everyone just yet, but, actually, I believe that we’ve broken Moore’s law.
“We are now scaling in the NAND arena much faster than is assumed under Moore’s law, and that has really changed the rate at which incremental memory can be used in different and new ways. As a result, I believe it will also pretty quickly change the way computers are architected with respect to memory distribution. So we’re going to start seeing changes in what types of memory are used and where they are located in the memory system, and it’s all being driven by a huge productivity growth associated with NAND flash and the rate at which we’re scaling it. We are scaling it faster than anyone else in the world now and we are also well tuned to the increasingly pushy demands of mobile communications, computing and image capture devices.”
The productivity growth Durcan alludes to has been particularly sharp for Micron over the past year. The formation of IM Flash – a joint venture with Intel – in January 2006 has seen the companies bringing online a state-of-the-art 300mm NAND fabrication facility in Virginia, while another 300mm facility in Utah is on track to be in production early next year. The venture also produces NAND through existing capacity at Micron’s Idaho fabrication facility. And just to keep things even busier, the partners introduced last July the industry’s first NAND flash memory samples built on 50 nanometre process technology. Both companies are now sampling 4 gigabit 50nm devices, with plans to produce a range of products, including multi-level cell NAND technology, starting next year. At the same time, Intel and Micron announced in November 2006 their intention to form a new joint venture in Singapore (where Micron has a long history of conducting business) that will add a fourth fabrication facility to their NAND manufacturing capability.
In June 2006, Micron also announced the completion of a merger transaction with memory card maker Lexar Media, a move that helped Micron expand from its existing business base into consumer products aimed at digital cameras, mobile computing and MP3 or portable video playing devices.
“Our merger with Lexar is interesting for a number of different reasons,” Durcan comments. “Certainly it brings us closer to the consumer, as, historically, our products tended to be sold through OEMs. But, in addition, it provides the ability to build much more of a memory system, as opposed to stand-alone products, given that Lexar delivers not only NAND memory, but also a NAND controller that manipulates the data in different ways and puts it in the right format for the system that you’re entering. Working closely with Lexar, we want to ensure that this controller functionality is tied to the new technologies we want to adopt on the NAND front, making sure that they work well together, thus enabling more rapid introduction of new technologies and getting them to market more quickly.”
The considerable activity of the past twelve months clearly reflects Micron’s view of itself as a company that is in the business of capturing, moving and storing data, and aiming for the top of the tree in each area. On the ‘capturing’ front, for instance, Durcan notes: “We’ve been very successful from a technology development perspective, and I think we’re pretty much the unquestioned leader in the image quality and imaging technology arena. As mentioned, we also happen to be the world’s biggest imaging company now – it happened more quickly than any of us thought it would, but it was driven by great technology. So we have plenty of challenges now in making sure that we optimise the opportunity we’ve created to develop new and more diversified applications.”

Stringent tests
Certainly, the company is willing to put its developments to the most stringent of tests. All of Micron’s senior executives, including Durcan, recently drove four Micron off-road vehicles in an exceptionally rugged all-terrain race in California, the Baja 1000, digitally capturing and storing more than 140 hours of video from the race, using Micron’s DigitalClarity image sensors and Lexar Professional CompactFlash memory cards specially outfitted for its vehicles. All the technology performed remarkably well, as did Micron’s CEO Steve Appleton, who won the contest’s Wide Open Baja Challenge class some 30 minutes ahead of the next closest competitor.
Appleton’s energetic and non-risk-averse approach to both the Baja 1000 (in some ways the American version of the Paris Dakar Rally) and to life in general (he is reputed to have once crashed a plane during a stunt flight, but still proceeded with a keynote speech just a few days later) is reflected in an undoubted lack of stuffiness within Micron.
Certainly, the company has taken a certain level of risk in pioneering technology developments. RFID is a case in point. “Sometimes,” Durcan explains, “the technology was there, but the market was slow to develop. RFID is a good example. Today, Micron has the largest RFID patent portfolio in the world. We certainly developed a lot of the technology that is now incorporated in global RFID standards, but when we first developed it, the threat of terrorism, for instance, was less obvious, so we simply couldn’t get these tags going that are now absolutely commonplace. I suppose you could say we’ve been a little ahead of our time.”
The company is also managed by a comparatively young executive team, with a very non-hierarchical approach to business. “I do believe that we have a certain mindset that keeps us pretty flexible,” Durcan explains, “and one of our strongest cards is that we have some really great people, with a great work ethic. At the same time, we drive a lot of decisions down into the company. We’re probably less structured in our decision making than a lot of companies.
“So, we try to get the right people in the room (not necessarily in the room actually, but on the same phone line!) to make a decision about what is the right space to operate in, then we can turn it over to people who can work the details.
“We try to get to that right space, at a high level, through good communication and then drive it down. It is the opposite of what I believe can happen when companies grow, become compartmentalised, and tend to get more and more siloed.
“There is also very strong synergy between the different activities within Micron,” he continues. “In each case we’re really leveraging advanced process technology, advanced testing technology, and large capital investments in large markets. There are a lot of things that are similar and they do all play closely with each other.”

International bunch
Micron’s people are, in fact, a truly international bunch, recruited globally, and bringing a great diversity of skills and approaches to the company. “I think that we are one of the most global semiconductor companies in the world,” Durcan says, “despite being a relatively young company. We recently started manufacturing our sensors in Italy and have design centres in Europe, both in the UK and Norway, which are expanding their operations. In fact we are now manufacturing on most continents – except Africa and Antarctica – and we have design teams right around the world who work on a continuous 24hr cycle handing designs from site to site. We’ve tried to grow a team that is very diverse, and leverage the whole globe as a source of locating the best talent we can.”
So, does all this talent produce its own crop of future gazers? Durcan believes they have their fair share.  “There certainly are people at Micron who are very good at seeing future applications. My personal capabilities are much more at the technology front end. I can see it in terms of ‘we can take this crummy technology and really make it great’. Then I go out and talk to other people in the company who say ‘that’s fantastic, if we can do that, then we can...’. It really does take a marriage of the whole company, and a lot of intellectual horsepower.”
That horsepower has resulted in a remarkable number of patents for Micron. Durcan comments: “The volume and quality of new, innovative technology that Micron has been creating is captured by our patent portfolio.  It’s an amazing story, and something I’m really proud of.  The point is, Micron is a pretty good-sized company, but we’re not large by global standards – we’re roughly 23,500 employees worldwide. Yet we are consistently in the top five patent issuers in the US.
“I feel the more important part of the patent story, however, is that when people go out and look at the quality of patent portfolios, they typically rank Micron as the highest quality patent portfolio in the world – bar none. I think that’s pretty impressive and speaks volumes about the quality our customers benefit from.”

Lynd Morley is editor of European Communications

So, what’s the correlation between Charles Darwin’s theory of evolution and base station antennas, you may ask. Peter Kenington makes the connection...

In 1859, when Charles Darwin first published “On the Origin of Species...”, there is little doubt that he did not have wireless base-stations in mind. In reality, however, many of his ideas are as applicable to this area of evolution as they are to evolution in the natural world. One of the key elements that has enabled this to happen is the now widely accepted set of open specifications, set out by the Open Base Station Architecture Initiative (OBSAI), covering the interfaces between the main modules within a base station.
Darwin's key insight was in noticing that, in the natural world, the survival of a species is based upon its ability to adapt to environmental change and to competition from rival species that are also evolving. The key to survival is in finding a niche, be it large or small, within the ecosystem of our planet. Niches exist at all levels within the food chain, from that of a simple, low-functionality existence (e.g. single-celled creatures, bacteria etc.) through to something of much higher 'performance' (e.g. tigers, dolphins and man). The same situation exists in the base-station arena, where simple, low-cost pico and femto BTSs are beginning to emerge to fulfil low-cost, short-range coverage requirements. These complement larger, more sophisticated designs for high-capacity, multi-carrier macro cells. In both cases, the key to survival rests in achieving acceptable or superior levels of performance for the lowest possible ownership cost.
The natural world and the world of base-stations have one major difference from an evolutionary perspective, and that is that base-stations can now immediately take advantage of the latest innovation (in RF or baseband) through the designing-in or substitution of a new RF, baseband, transport, clock/control or power supply module. The internal interface specifications pioneered by OBSAI enable this process for a BTS (Base Transceiver Station). In the natural world, however, the adoption of a new form of jaw or leg muscle from another species would either take very many generations or prove to be impossible. This is, in many respects, how things used to be in the base-station world: if a particular OEM included a useful innovation in its new generation BTS product, at best this would be copied by its rivals in a subsequent generation of BTS.
The adoption of the OBSAI standards represents a significant short-cut in this process, since innovations that are included in module products placed on the open market can become part of many manufacturers' products quite quickly. This potentially reduces the hardware development burden within the OEMs and allows them greater freedom to concentrate on developing the aspects of their products that will provide true differentiation from their competitors. Many of these areas fall within the domain of software, which will increasingly dominate in future BTS generations. It is this area more than most that will lead to the survival of the 'fittest' – which in a BTS context translates to the most innovative.
The OBSAI organisation consists of more than 130 component, module and base station vendors. Its aim is to create an open set of specifications for the internal modules required in a base-station, encompassing both interface and mechanical aspects. A good analogy in the PC industry is that of the PCI bus/card specifications. Many more unit-level options will become available in the future, as the technology develops to integrate modules into combined units. This will open up new location possibilities for base-stations due to the availability of smaller and more versatile architectures that are easier to site.

Revolution in evolution
The last 20 years or so have seen a revolution in the PC industry – in cost, certainly, but also in capability – and this has been brought about, in large part, by the PCI bus. Most PC manufacturers offer a huge range of models, but these generally fall into a very small number of 'families' – often just one. PC manufacturers have achieved the seemingly impossible, in being able to offer a huge range of choice to their customers whilst employing a modest cost base for their organisation. This has been achieved by minimising inventory and design engineering effort through the use of standard packaging and 'modules' (e.g. graphics cards, DVD drives, memory cards, motherboards etc.). These modules can then be selected appropriately to generate a large product offering suitable for all tastes and budgets. Their success rests, in large part, on being able to provide the customer with exactly what he or she wants. This is a level of service that it has not, in the past, been economic to provide in the base-station area.

Changed landscape
OBSAI's announcement, in June 2006, that it had released a full set of interface, hardware and test specifications for the internal interfaces within a base-station has changed the landscape for the mobile radio base-station. OBSAI's specifications are compatible with all of the major current and emerging air interface standards, including GSM, GSM/EDGE, WCDMA, CDMA2000 and WiMAX and are available for public download free of charge (from www.obsai.com). These specifications allow module vendors to manufacture modules that are capable of operating in any OBSAI-compatible BTS, thereby reducing substantially the development effort and costs involved in the introduction of a new range of BTS products. They also enable a more PC-like model to be adopted in the design and construction of a BTS product – i.e. the selection of modules from a range of vendors at a range of capability levels and costs, such that the overall BTS closely matches the operator's requirements. Embracing this model will be a route, and a key, to survival in the emerging BTS marketplace.
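To picture this 'PC-like' composition model, the short sketch below (with invented vendors, prices and capacities; it is not drawn from the OBSAI specifications themselves) selects, for each module slot, the cheapest interface-compatible module that meets the operator's capacity requirement:

# Hypothetical catalogue of OBSAI-style modules; vendors, prices and capacities are invented.
catalogue = {
    "rf":        [{"vendor": "A", "carriers": 4,  "price": 900},
                  {"vendor": "B", "carriers": 8,  "price": 1500}],
    "baseband":  [{"vendor": "C", "carriers": 8,  "price": 1200},
                  {"vendor": "D", "carriers": 16, "price": 2100}],
    "transport": [{"vendor": "E", "carriers": 16, "price": 400}],
}

def build_bts(required_carriers):
    """Pick, per module type, the cheapest module that meets the carrier requirement."""
    bts = {}
    for module_type, options in catalogue.items():
        suitable = [m for m in options if m["carriers"] >= required_carriers]
        if not suitable:
            raise ValueError(f"no {module_type} module supports {required_carriers} carriers")
        bts[module_type] = min(suitable, key=lambda m: m["price"])
    return bts

macro_cell = build_bts(required_carriers=8)
print(sum(m["price"] for m in macro_cell.values()))   # total bill of materials for this mix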

A head in the clouds
The changes in the market landscape – in terms of planning restrictions, health concerns, acoustic noise objections (from cooling fan noise) and many other issues – are making it increasingly difficult for operators to erect new cell sites. New BTS architectures are therefore emerging to try to address these problems; here again, the survivors will be those that adopt these new architectures and can make them work for their customers.
These issues have given birth to the remote RF head – a new form of BTS deployment that is fully supported by the OBSAI specifications. This architecture places the active RF electronics remotely from the rest of the BTS and its associated backhaul. The remote RF head itself houses all of the radio-related functions (transmitter RF, receiver RF, filtering etc.). This is then connected to the remainder of the BTS via fibre-optic cable.
The above arrangement allows main elements of the BTS (the digital and network interface modules) to be housed in a low-cost internal location, such as a basement. The RF head can then be situated on the roof of the building or on an outside wall. Another option is to site the remote RF head at the top of an antenna mast, with the remainder of the BTS being located at the base of the mast in a suitable hut or other enclosure.

Cheap hotels bring comfort
This principle can be extended to multiple remote RF heads, whilst still maintaining a single, central, location for the other aspects of their associated base-stations. This concept is usually referred to as a BTS hotel. The remote RF heads themselves can be located a substantial distance from the main BTS hotel site, due to the very low losses associated with the fibre optic cables used to connect them to the remainder of the BTS.
One of the main advantages of the BTS hotel architecture lies in its ability to provide cost-effective BTS redundancy. It is typically not economically viable to provide 100 per cent redundancy within a traditional BTS. However, in the case of a BTS hotel, N+1 redundancy can be used (i.e. the provision of one redundant BTS covering a number of active BTS systems within the BTS hotel location).
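The economics are easy to see in a short sketch (illustrative figures only): with traditional one-for-one redundancy every BTS needs its own standby, whereas a BTS hotel needs just one standby unit for the whole pool of active systems.

# Illustrative cost comparison of 1+1 redundancy versus N+1 redundancy in a BTS hotel.
def redundant_units(active_bts, scheme):
    if scheme == "1+1":
        return active_bts            # one standby per active BTS
    if scheme == "N+1":
        return 1                     # a single standby covers the whole hotel
    raise ValueError("unknown scheme")

ACTIVE = 12
COST_PER_BTS = 50_000                # hypothetical unit cost

for scheme in ("1+1", "N+1"):
    spare = redundant_units(ACTIVE, scheme)
    print(scheme, "spare units:", spare, "redundancy cost:", spare * COST_PER_BTS)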

Significant disadvantage
The significant disadvantage of the BTS hotel architecture is, however, the cost of the fibre optic links that run between the BTS hotel and its remote units. Installing new fibre – if there is none in existence already – involves significant civil works and is therefore extremely expensive. There are, however, a number of examples of various types of BTS hotel in operation today, covering applications in city centres, at airports and for major sporting events.
So, natural selection has been a part of the evolution of the earth's species for billions of years and has proved to be a successful method of ensuring that the best-adapted species are available to maintain our ecosystem. In the world of base-station engineering, the same principles apply – however, the timescales are dramatically shorter. The open specifications provided by OBSAI shine a powerful light on the future evolutionary path for the BTS – it will be interesting to see who is the fittest and, hence, survives this new dawn.

Peter Kenington is the Managing Director of Linear Communications Consultants and the Technical Chair of OBSAI. Email: pbk@linearcomms.com

A fundamental part of handset design is usability testing – the measurement of the ability of users to complete tasks effectively. Mats Hellman explains

As mobile handsets evolve into ever-more sophisticated devices, with an ever-expanding list of capabilities, it is vital that users are able to access the features they want quickly and intuitively. Users need to feel equally at ease accessing Internet pages or sending a multimedia message from their mobile devices as they do looking up and dialling a friend’s number. By creating a satisfactory user experience, handset makers can earn long-lived popularity and loyalty that goes beyond the initial appeal of stylish design.
It is increasingly important for mobile device makers to put special emphasis on user satisfaction in a world where consumers have a huge choice and can switch their handsets regularly with little effort or cost.
Central to the creation of winning handset designs is usability testing: measuring the ability of users to complete tasks efficiently and effectively. Furthermore, it is vital to check that user expectations are satisfied before new models are introduced into the market.
Usability testing involves putting processes in place to gather good user input – and then using it well – to ensure a good user experience. This may sound like common sense, but our experience shows such techniques are under-used across the mobile industry.

Ask the true experts
Putting the user at the heart of the design process is an extremely effective way to gather the more subjective feedback needed to enhance design. By integrating usability testing into product development, and by working closely with users we begin to understand how users experience efficiency and effectiveness, rather than just trying to construct objective measurements of efficiency and effectiveness themselves.
Many handset designers are tempted to base their usability metrics on a composite of simple observations, such as the number of clicks or the number of errors made when accessing a particular function. Today, the most popular ways of testing usability are to carry out consumer surveys and to undertake lab testing. While both will produce useful results, they leave little or no room for further probing should any unexpected issues arise from the feedback.
What’s more, large-scale surveys might produce substantial amounts of data, but all too often this data is not fed back into development teams effectively. Unless usability testing is conducted continually – with rapid feedback into the design team – the user data can quickly get out of sync with the design process, and it gets harder for designers to ask follow-up questions while they’re still working on the relevant feature.
Measuring the number of clicks and errors users make only tells us so much. Besides, it could well be that making more clicks to access a particular function is more effective if the user feels the process is more logical and easier to follow. A screen crammed full of supposedly helpful icons – while needing fewer clicks – might be more confusing to the user than having a smaller set of icons that each lead to an extra layer of options. Furthermore, errors made as part of the learning process are not always viewed as unsatisfactory by users. On the other hand, errors that are made repeatedly as a result of poor, non-intuitive interface design are very frustrating.
One way to address this issue effectively is to create a permanent ‘test expert’ who acts as the user advocate throughout the development process. This role follows the traditions of participatory design and an ethnographic approach to research, as well as suiting modern methods of software engineering.
Building on its success in computer system development in industry, participatory design is now finding favour in mobile handset usability testing as a way of bringing user expertise into the design process. UIQ Technology has itself developed a model for measuring attitudes directly that eradicates the need to second-guess certain behaviours and maintains the focus on the user experience, rather than user behaviour.

Testing in the comfort zone
In this new model, the test expert sits down with users to evaluate design alternatives, and discover new options, to find out what makes a satisfying design. The goal is to understand and describe the use of mobile phones as perceived by the users; to collect data from the inside, not the outside.
More meaningful results are achieved by conducting usability testing in familiar locations – rather than in test environments – where users feel more comfortable and open to sharing their experiences and views. By working side-by-side with users, software developers can help them play an enthusiastic and engaged role in the design process – resulting in handsets that work for them.
Ideally, the user and test expert should test the phone together. The user performs a number of tasks and discusses the experience of using the phone with the test expert. The test expert monitors the way the user accomplishes the task and notes any difficulties. They both respond to written statements of attitude to record their evaluations of the phone.
The next stage is to put the results of the interactive testing to good use among two key audiences: designers and decision makers. Designers want concrete, immediate feedback to be able to improve interaction design during every step of the process. Decision makers, on the other hand, prefer cold hard facts in diagrams to present at meetings.
The test expert’s observations, together with the evaluations made with the tester, provide detailed feedback to designers. The advantage of carrying out continuous testing and relaying test outcomes informally and directly to designers is an extremely short turn-around time from problem discovery to implemented solution.
To suit the needs of decision makers, rather than being presented as pure numbers, the usability metrics for a given phone are expressed as positions on a two-dimensional field, in which user satisfaction or dissatisfaction is plotted against efficiency. This offers an at-a-glance view of how satisfactory the user experience is, and where resources need to be allocated for improvement.
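A minimal sketch of that presentation (using assumed scales and scoring, not UIQ Technology's actual model) might reduce each test session to a single efficiency and satisfaction coordinate that can be plotted for decision makers:

# Hypothetical scoring: attitude statements rated 1-5, efficiency derived from task completion times.
def session_point(attitude_ratings, task_times_s, target_times_s):
    """Return an (efficiency, satisfaction) coordinate for one user-test session."""
    satisfaction = sum(attitude_ratings) / len(attitude_ratings)          # 1 (poor) to 5 (great)
    ratios = [target / max(actual, 1e-9)
              for actual, target in zip(task_times_s, target_times_s)]
    efficiency = min(sum(ratios) / len(ratios), 1.0)                      # capped at 1 (at or under target)
    return efficiency, satisfaction

# One tester: three attitude statements, three tasks with target completion times in seconds.
print(session_point([4, 5, 3], task_times_s=[20, 45, 90], target_times_s=[15, 60, 60]))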
This approach differs from usability metrics that aim to determine whether one system is more usable than another through experimental methods similar to those used in the natural sciences, in that it is grounded in the social sciences. The test results are not validated through future repetition, but through the knowledge and experience gained by the test experts, and how these influence the development of the mobile handset to ensure the best possible user interaction experience.

Confidence to move forward
UIQ Technology itself learned a lot from this process in the move from UIQ 2 to UIQ 3 – which made the change from a user interface that was primarily pen-based to one that could be solely key-based. The intensive usability testing programme we put in place gave us the confidence to make a number of drastic design changes.
One such change was the introduction of the new navigation control for list views in the Contacts application. This provides a ‘peep-hole’ into a given contact’s details without having to open the full contact page. Each item, such as a phone number or email address, can be acted upon directly from the list view (to initiate a text message, for example). Our user testing uncovered a number of design wrinkles that had to be ironed out. Usability testing also really helped in the development of the Agenda application, where we discovered a number of important differences between using the calendar with one hand (in soft-key mode) and with two hands (in pen mode). The test users contributed greatly to finding good solutions for the redesign.
The process of implementing one-handed navigation has been aided greatly by having users available on a daily basis, with the test expert maintaining user focus rather than just doing ‘unit testing’. Extending a touch-screen, pen-based UI to serve as a soft-key, non-touch-screen UI as well, on just one codeline – while offering consistent navigation behaviour between the two UI styles – was no simple task! It would not have been possible in the time taken without the kind of user input we gathered through our usability testing model.
Evaluating and acting on real consumer experience and satisfaction need not be the insurmountable obstacle it is widely perceived to be in the industry. There can be no excuse for any consumer dissatisfaction with a phone that is already on the market. We all need to learn to do the right thing from the start: it’s for our own good – as well as our customers’.

Mats Hellman is Head of Systems Design at UIQ Technology

Richard McBride looks at the profitability challenges of the iPod phone and examines how OSS applications such as interconnect billing and mediation can help operators transform this new musical device into a commercial success for all

Just when consumers were getting used to the idea of 3G, along comes another mobile gizmo to get all excited about. Get ready for the iPod phone, Apple's ticket into the world of wireless communications. Worried about mobile operators eating away at its profit margins with their latest hybrid music phones, Apple has developed an iPod handset that promises to set new trends and blow the competition away. This is a strategic business move for the computer giant, whose iPod music player has become the most successful hand-held music gadget in history.
Apple's achievement is remarkable, but compared to the mobile phone industry, the iPod's future growth rate seems small and limited in scope. This explains Apple's decision to produce an iTunes phone with global leader Motorola, which will be launched with much fanfare in the first quarter of 2007.
Like all new technologies – most notably 3G – the new phone has been the subject of much controversy, with some pundits warning of business cannibalisation, cost-prohibitive technology and overambitious designs. The pod-phone's dream of massive global sales cannot be achieved under these conditions, leaving Apple with a device that, once again, only sells to the same, small number of users. Apple and Motorola have done nothing to assuage the negative press either. Instead, the mega brands have admitted to 'significant hurdles' in their attempts to create a pioneering new phone.
Show me the money
Making money for everyone is probably one of the biggest hurdles of all. After all, despite the phenomenal success of Apple in building a paid market for music downloads, research has shown that more than 80 per cent of iPod owners do not pay for digital music. Just 17 per cent of European iPod owners purchase music on a monthly basis, according to a report by Jupiter Research, while 30 per cent of iPod owners download music illegally for free and 23 per cent do the same with video. The figures include paid-for music, as well as tracks downloaded free from legal and illegal sites.
The iPod phone, therefore, has to come up with a way to make sure that it's a profitable gadget for everyone involved. But Motorola and Apple have very different views about the future of music on mobile phones from those of the network providers. The duo wants to let customers put any digital tune they already own on their phones for free. That would help Motorola increase phone sales while helping Apple expand its dominance of digital music. The wireless operators, however, want customers to pay to put music on phones, and if this isn't possible then Apple and Motorola must offer other opportunities for them to make money.
Once Apple, Motorola and the carriers figure out a costing plan for all the music downloads and a way to divvy up razor-thin profit margins, the next step will be for operators to find a way to bill for the services. They will need to know, for example, how to charge teenagers for downloading the latest music in real time on the iPod phone, or offer cheap downloads of seventies favourites to yuppies, while enjoying maximum profitability.
Indeed, the complexity of charging and giving customers what they want – without losing money – is greater than it has ever been, mainly because the profit margins on music downloads are very slim for operators. That is why effective intercarrier billing and mediation will be crucial for operators if they want to maximise revenue from every bit of traffic on their networks. This can only be established with a flexible, real-time technology that can track any type of data generated by services supported by the iPOD phone and use that data for settlement purposes. This is key to making things work in this new mobile environment.
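As a minimal sketch of what such real-time rating might look like (Python, purely illustrative – the event fields, tariff names and prices below are assumptions rather than any vendor's actual data model), a mediation layer normalises each download into a usage event and a rating step applies the appropriate tariff before the record is passed on for settlement:

from dataclasses import dataclass
from datetime import datetime

# Illustrative tariff table: price per download by customer segment.
# Segments and prices are assumptions for the sketch, not real rates.
TARIFFS = {
    "chart_realtime": 0.99,   # latest chart tracks, charged as downloaded
    "retro_bundle": 0.49,     # discounted back-catalogue downloads
}

@dataclass
class UsageEvent:
    subscriber_id: str
    service: str          # e.g. "music_download"
    content_id: str
    segment: str          # which tariff applies to this subscriber
    timestamp: datetime

def rate_event(event: UsageEvent) -> dict:
    """Apply the segment tariff and return a rated record that the
    settlement stage can consume."""
    charge = TARIFFS[event.segment]
    return {
        "subscriber_id": event.subscriber_id,
        "content_id": event.content_id,
        "service": event.service,
        "charge": charge,
        "rated_at": event.timestamp.isoformat(),
    }

# Example: a chart download rated the moment it is delivered.
rated = rate_event(UsageEvent("sub-001", "music_download", "track-123",
                              "chart_realtime", datetime.utcnow()))
print(rated["charge"])  # 0.99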

Greater understanding
Fortunately for the mobile business world, technological advances have been made to enable carriers to gain a greater understanding of how to settle bills with customers while providing them with clear pricing options in real time – whenever and however they want. And, surprisingly, it is probably the easiest problem facing Apple, Motorola and the mobile carriers. Most operators in Europe have already upgraded their networks to accommodate the need for this type of business advantage when 3G was launched. 
Equally important to the success of the iPOD phone is the ability to maintain relationships with intercarrier partners. Profits may be slim, but operators must find a way to be compensated by ISPs – in the iPOD's case, the music companies supplying tunes – for the use of their networks to support services. Mobile players must also have the capability to compensate partners for delivering services to the mobile users. Billing and mediation applications make up the central nervous system in the network. Convergent mediation gathers the network usage data for billing purposes and feeds it to the interconnect billing system, which then produces the bills necessary for compensating everyone involved in delivering a service to a mobile customer. This method of billing is usually the first or second source of income generation for operators.
With the right back-office systems in place, operators peddling the iPOD phone can automatically split the bill with their music content providers so everyone is making as much money as possible through the iPOD service delivery chain. This method of billing can potentially generate hundreds of millions of dollars in revenue, most of which can be reinvested in the business to keep end-user service costs low. Technical flexibility in OSS billing is important to bring new services quickly to market – without having to worry about the process of charging end users or third party vendors. Every calculation is done automatically, so operators can spend more time on developing their business plans and focusing on what the market wants from the mobile provider. Within a single platform, the operator can process, calculate and print subscriber bills without any human intervention. Best of all, intercarrier solutions can support the rating and billing of any type of billing event, regardless of customer type, product or payment method.
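A hedged sketch of that bill-splitting step (again Python and purely illustrative – the partner names and percentage shares are invented for the example, not terms from any real iPOD or operator agreement) shows how a rated charge might be allocated across the delivery chain before the interconnect billing system produces its settlement statements:

# Illustrative revenue-share split for a rated download charge.
# The shares are assumptions; a real interconnect billing system would
# load them from the partner agreements it holds.
REVENUE_SHARES = {
    "content_provider": 0.55,   # music company supplying the track
    "handset_partner": 0.10,    # device/storefront partner
    "operator": 0.35,           # network operator retains the rest
}

def split_charge(charge: float) -> dict:
    """Allocate a rated charge across the delivery chain, rounding to
    whole pence and giving any rounding remainder to the operator."""
    allocations = {party: round(charge * share, 2)
                   for party, share in REVENUE_SHARES.items()}
    remainder = round(charge - sum(allocations.values()), 2)
    allocations["operator"] = round(allocations["operator"] + remainder, 2)
    return allocations

# The allocations sum back to the original 0.99 charge.
print(split_charge(0.99))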
Many OSS billing solutions are ready for iPOD's billing challenges, but are the mobile operators ready to capitalise on the flexibility of such a solution? 
The big challenge for Apple and Motorola isn't the technology surrounding the make-up of the phone or a lack of technology to help bill for services. Everything is available except for the commercial models that are going to sell the gizmos. This is the biggest hurdle left for the iPOD phone and probably one of the main reasons why its launch has been put on hold for so many months. Coming up with a way to profit from this technology will be a big challenge, but once it's overcome there will be music in everybody's ears for many years to come.

Richard McBride is Director of Settlement and Strategy at Intec   www.intecbilling.com

Following World Telecom 2006, the ITU assesses the connected world we live in today – and how technology will shape tomorrow

Sensing the increasing intensity of the sunlight, your curtains open, and the bedroom radio switches itself on. As you shower, the daily news relevant to you is projected onto the bathroom wall, your microwave oven reads the RFID tag on your food wrapping and cooks you breakfast. And to ensure you don’t run out of your favourite meal, a message is sent from the refrigerator to the nearby store to order more. 
This may sound like science fiction, but in the near future, the digital revolution will take on an entirely new dimension, with the development of networks and computing based on technologies like RFID (radiofrequency identification) and sensor networks. In this digital future, the world’s networks will not only connect people and data, but also everyday objects. Mundane daily tasks will become increasingly automated, and the technology behind them will fade from the perception of the user. This means we can expect a very near future where the Internet is still the fundamental tool of the day, but we will rarely need to sit at a computer to use it. 
We are heading into a new era of ubiquity, where the users of the Internet will be counted in billions and where humans may become the minority. Instead, most of the traffic will flow between devices, creating a much wider and more complex ‘Internet of Things’.
If humans continue to be the only Internet users of the future, then the total user base might conceivably double, but it is unlikely to go beyond two billion active users. On the other hand, if ‘things’ become active Internet users on behalf of humans, then the number of connections could be measured in terms of hundreds of billions. Fridges communicating with grocery stores, laundry machines with clothing, implanted tags with medical equipment, and vehicles with stationary and moving objects. It would seem that science fiction is turning into science fact in an Internet based on ubiquitous network connectivity  – truly ‘anytime, anywhere, by anyone and anything’.

Powering the future
And it’s not as futuristic as it sounds. An expanded Internet already exists that can detect and monitor changes in the physical status of connected things, through sensors and RFID, in real-time. Developments in miniaturisation have enabled technological ubiquity, and networks – and the objects they connect – are also becoming increasingly intelligent, through developments in ‘smart technologies’.
In the industrialised world, advances in RFID have focused on supply-chain efficiency. However, remarkable initiatives in Asia bring a whole new world of potential. The Blind Navigation Project currently being trialled in Tokyo to assist visually impaired people to move through the city streets is being monitored and mimicked in similar schemes in the USA. Near-field RFID communication enables mobile phone handsets to make payments, and shoppers to be alerted to the nearest reduced-price bargain.
Digital technologies are fast becoming indispensable. A growing array of devices and technologies are on offer today, making users much more mobile. These range from slimmer and faster laptops, to MP3 players with video capabilities and mobile phones with high-speed Internet access. It took almost 21 years to reach the first billion mobile users – rapid progress when compared with the 125 years it took fixed lines to reach a similar figure. However, even this pales in comparison with the mere three years it took the second billion mobile users to sign up.
The evolution from second to third generation mobile networks is arguably just as important as the jump from analogue to digital and is proceeding much more rapidly. And at the same time, broadband networks and media convergence are generating new avenues for distributing digital entertainment. User devices are now multi-functional and increasingly personalised. In the future, advances in connected computing will enable millions of things to use the Internet.
How we use technology has changed: computers and mobile phones are now used for purposes as diverse as recording audio and video, gaining political power, prayer, entrepreneurship, and creating and building social networks. Digital innovation is rapidly expanding to all aspects of daily living. Digital homes, with sensor-enabled blinds, online security systems, customised entertainment systems, and intelligent appliances are all already on the market. With contact-less payment systems, seamless digital transactions are possible online and via mobile devices. And content can be delivered depending on the preferences and location of the user. Such context-aware services have become a priority for service providers as the need to keep abreast of constantly mutating user lifestyles becomes ever more essential. It is expected that the computer as a dedicated device will disappear, as eventually even particles and 'dust' might be tagged and networked. These kinds of developments will turn what are seen today as merely static objects into newly dynamic things, embedding intelligence in our environment, and stimulating the creation of innovative products and entirely new services.
Getting started on the road to a ubiquitous networked society does not even require too major a financial outlay. The cost of putting RFID on a product carton is 5 US cents – and the case for doing it is compelling enough to encourage many major US retailers to tag everything from a yoghurt carton to a pair of socks. Already RFID is constantly feeding individuals and things with information. Never before has information been so available and so accessible. And soon there won't be anywhere that you won't be connected. And not just you – but your watch, your pen, or even your evening meal. Stroll around a supermarket with an RFID-reading phone and receive information about any product, link to an Internet-full of information and find out if what you're buying is right for you. And as you're doing it you'll be providing the store with information about how you move around the aisles, what you purchase and what offers attract you.
Needless to say, all these developments will have important implications for society and individual lifestyles, and will impact business strategy and policy priorities. From the individual human user's point of view, will people suffer from information overload? Will the effect of having every medical source book to hand as you pick up a box of vitamins turn us all into 'cyberchondriacs'? Will addiction to online living rise as the virtual world seems to offer more than reality? Identity theft? Persistence of a greater variety of spam? The horror stories are there for those tapped into the ubiquitous network. But what about those human users that have yet to connect? Will people still have a choice of whether to be a technology user, to have a phone or a computer, or not?
One in three humans in the world (man, woman and child) currently owns a mobile phone.  Those resident in the developed world, with the ability to connect but not the desire, may well suffer social pressure and miss out on services and information. A new class divide could emerge, between those online and those offline.

Access on a global scale
Choice is one issue. But access is another. The impact of technology, particularly mobile communications on the developing world is significant. There are one billion poor people who lack access to phone services and are willing to pay for them. Studies have shown that people in some of the poorest countries are ready to spend a significant part of their income on ICT to help improve their social and economic well-being. In Namibia, Ethiopia and Zambia, for example, households spend more than 10 per cent of their monthly income on phone services because it helps them save money in other areas. The estimated average technology spend in developed countries is about 3 per cent of monthly income.
In 2004 alone, the African continent added almost 15 million new mobile cellular subscribers to its subscriber base, a figure equivalent to the total number of (fixed and mobile) telephone subscribers on the continent in 1996, just eight years earlier. But it is not enough to look at the growth of mobile subscribers to understand the impact that the mobile phone has made. Besides bringing access to many rural areas that used to be excluded from any form of communication, the mobile phone has improved people's lives in many ways. In Uganda, for example, farmers can use their mobile phone to find out about the latest crop prices. Instant and direct access to market prices increases their revenues, provides them with valuable information to negotiate and protects them from being cheated by middlemen. In South Africa, the Compliance Service uses SMS to remind tuberculosis (TB) patients to take their medication. TB patients must follow a difficult drug regime over an extended period but often fail to do so simply because they forget. Non-compliance with the drug treatment has exacerbated TB cases and been a burden on the local health care service by wasting precious medicines. The project, which started in Cape Town in 2002, has substantially decreased the number of treatment failures.
The prospects of the ubiquitous networked society are limitless, but it is vital that efforts are made to understand and educate users and businesses about the opportunities and the trade-offs. One of the core aims of the International Telecommunication Union (ITU) is to minimise digital exclusion and maximise digital choice. Although there is no single fix for every country's issues, there is best practice that can and must be shared.
In a not too distant future, watch as evening falls, the electric lights switch on and, as you move from one room to another, the ambient music level adjusts, the room temperature changes and your dinner cooks itself. You scan your room for a missing sock and receive a message that the present you ordered overseas has arrived in the country. Meanwhile, your garbage can is talking to your local council.

www.itu.int

Hugh Roberts looks at how service providers are developing their offerings and the consequences this may have for OSS/BSS systems which are already in place

Depending on your viewpoint, 2006 has either seen the marketplace for consumers wishing to purchase communications services get substantially simpler – or quite possibly the opposite. Historically, companies were set up to exploit discrete customer needs: a fixed wire telephone company for your home phone; a cellular operator for your mobility; an ISP for accessing the internet (with or without the availability of premium content and hosted services); and a satellite or cable provider if you wanted to increase your broadcast choice.
 Although there was some crossover – with offerings such as cable telephony and video on demand – it wasn’t too difficult to work out which company to go to for which service. Perhaps more importantly, it was a fairly easy matter to identify how much you were likely to pay for each, and how you were going to get charged and billed for each.
Then overlaps and virtual offerings began to occur. In the UK, for example, cable providers extended their telephony offering to include broadband, and in some markets the choice of alternative voice providers multiplied into the hundreds, existing high street brands and ISPs started to offer virtual fixed and/or mobile services, and the mobile companies themselves – both network operators and MVNOs – now offer fixed, broadband and entertainment services. Even the satellite players are getting in on the multi-play act.
In the new world order of communications, aggregated branded multi-service bundles and M&A are now radically reducing the number of companies that customers can buy services from. While 'voice' is consequently getting cheaper and this is being presented as 'a good deal' for the customer, even with all-you-can-eat pricing being applied across the board it is surprisingly difficult for subscribers to work out exactly what they are going to pay, and whether it will actually be a less expensive and better quality overall level of service than was available before. In a number of cases customer rebellions have been highlighted in the media as these companies struggle to overcome the challenges of rolling out network and OSS infrastructure to keep up with demand, and BSS systems to integrate their customer information management and provide seamless and trouble-free customer services with unified billing.
In any multi-play bundled offering it seems to have become a golden rule that at least one of the key elements should be positioned as ‘free’, but most consumers have learned the hard way that there is no such thing as a free lunch. In many cases the small print, thresholds and cross contingencies on the usage of the other services for full qualification are extremely complicated and difficult to work out.
And all this before customers have to make sense of the choice of 'delivery channel' for converged services such as broadband Internet access or streamed video to mobile. Curiously, early usage indications show that the majority use of IPTV is in the home and not in 'a mobile context', and that a large proportion of this usage occurs in the same room as a TV set capable of accessing the same content! The sophistication of games on mobile phones has certainly increased, but the range of communications services now available on games consoles, and even within gaming environments themselves, puts the majority of handset enhancements to shame.
Bigger (and ‘wider’) is apparently better in the opinion of the new aggregated operators, but is this merely about the ability to leverage economies of scale? The primary driver – at least in markets where the penetration levels of one or more of the core service offerings is high – is to do everything possible to retain customers, increase their loyalty, and hopefully also capture and increase revenues from them. Clearly, the larger the number of services a customer has from one provider, the less likely he is to churn, not least because of the increased lifestyle disruption caused. However this is only part of the answer. There is a critical need for the services themselves to be ‘sticky’, or the whole value proposition may be compromised. Customers are getting smarter: it is no longer enough to have a brand, or premium content, or low (and understandable) pricing, or good quality customer service – all of these are now required to be competitive.
So, is it possible for the small and niche players to be winners? At the backbone layer, the answer would appear to be a resounding ‘no’. The satellite and cable industries have already reduced to a minimum and the number of pure ISP players is rapidly reaching the same point. No one is entirely certain exactly how many mobile network operators are viable in a given marketplace, but the evidence from the US is that in most developed countries the number is probably too high. There are some niches evolving in the infrastructure domain as new technologies such as WiFi/WiMax develop, but even these niches are rapidly being squeezed between the dominant operators and the growing positioning of ‘social information infrastructure,’ such as state or city owned metropolitan networks.
Notwithstanding some brand and service differentiation (reflecting different operators’ market segmentation strategies and consequent capture of premium content offerings), it is getting harder to tell the difference between the companies that remain in the marketplace, irrespective of where and how they originated. There is a growing customer expectation in highly penetrated and competitive markets that any CSP should be capable of offering a full (virtual or real) multi-play package and that substantial price reductions or cross-product discounts should accrue.
This would be the end of the story, apart from the fact that three factors are coming together to turn the existing communications industry structure on its head:
The mechanisms for revenue generation within the telecoms value chain are changing
The so-called 'X-Factor' companies such as Google and eBay/Skype are changing the value proposition for CSPs from one based solely on subscriber revenues to one which sees customers as both a source of income and a key resource in the chase for advertising and service sponsorship money. Revenue splitting with subscribers for 'customer generated content' is emerging as a prime source of income and loyalty for service providers – an ecosystem based on 'trading relationships with everyone' is replacing the 'uni-directional revenue flow' model.
As advertising-derived income erodes customer content revenues even further (moving from 'all-you-can-eat' to 'free' to the subscriber), eCommerce and financial transaction processing will take their place as critical revenue drivers (although content will remain the primary support for brand positioning). Prepaid (real-time) micro-transaction expertise is one area in which the telecoms industry can claim to be significantly more advanced than any other, and the scale of repatriation of funds to family and friends from northern to southern hemisphere via mobile phone is testament to this.
The industry is going to restructure along horizontal rather than vertical lines under both regulatory and commercial pressures
The EU's support for structural separation – the splitting of the infrastructure and services divisions – is well documented. Citing the break-up of AT&T into the Baby Bells in the US in the '80s, and Oftel's (now Ofcom) requirement for BT to establish 'operationally separate' business units in the '90s as precedents, the Commission seeks to guarantee fair access and to promote competition and investment across the region.
Commercially too, this makes sense. Building a business around the 'bit pipe' can be extremely profitable, but only if it is aggregated to achieve economies of scale; if multiple technology channels and platforms are owned so as to ensure the best cost/QoS delivery and avoid 'disruptive technology' pitfalls; and, perhaps most importantly of all, if trading takes place only in a B2B context, as the cost of maintaining a full consumer brand presence would be unsustainable.
Affinity groups are rapidly growing in importance as the primary mechanism for delivering high value service to niche customer groups
The secret of affinity groups (as opposed to communities of interest) is that they are driven by customer pull rather than service provider push. These have existed on the Internet for some time, but it is with the inclusion of mobility that their true value will be realised. It may be that ISPs' expertise and experience in handling closed user groups will prove their most important and lasting legacy to the communications ecosystem.
If an affinity group has access, content and mobility included within its remit, how does this differ from an MVNO multi-player? The structural answer is ‘not a lot’! The marketing answer is, of course, ‘quite a lot!’, and comprehending this will be the secret to understanding the ‘4G’ environment as it evolves. The niche players of the future will be the Customer Lifestyle Providers (CLPs) supported by a combination of Virtual Service Enablers (VSEs) and Network Service Providers (NSPs).
So, turf wars? For the operators that have successful subscriber and partner relationships, the transition may not be too uncomfortable. A thought should be spared, though, for the larger ISV and SI players in the OSS/BSS supply chain, whose product sets, deployment models and value propositions may be based on architectures and business unit inter-relationships that no longer match the requirements of their existing customers as they evolve from one-dimensional caterpillars into four+-dimensional butterflies.

Hugh Roberts is Associate Strategist - Logan Orviss International

Doug Overton investigates the causes behind the mobile ‘No Fault Found’ phenomenon and how the industry can solve this vastly expensive problem

Incredibly, one in seven mobile phones is returned as faulty by subscribers within the first year of purchase, according to research by Which? This statistic will doubtless raise eyebrows and drive speculation over product design flaws and standards in build performance. However, further analysis into the nature of these returns reveals the even more disturbing statistic that 63 per cent of the devices returned have no fault at all.
This figure, unearthed as part of a study into mobile device returns trends in the UK, places mobile phone 'No Fault Found' returns at a level 13 per cent above the average within the consumer electronics sector.
With operators, manufacturers and retailers collectively covering administration, shipping and refurbishment costs approaching GBP35 per device, this equates to a potential cost to the UK mobile industry of GBP54,016,200 and, more significantly, a global industry cost of GBP2,274,399,029 (US$4,499,898,480).
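As a rough back-of-the-envelope check on those headline figures (a sketch only: the annual UK handset sales volume below is an assumed round number, not a figure from this article), the UK estimate follows directly from the return rate, the 'No Fault Found' share and the per-device handling cost:

# Back-of-the-envelope reconstruction of the UK NFF cost estimate.
# Only the return rate (one in seven), the NFF share (63 per cent) and the
# ~GBP35 handling cost come from the article; the sales volume is assumed.
uk_handsets_sold_per_year = 17_000_000   # assumption for illustration
return_rate = 1 / 7                      # one in seven returned as faulty
nff_share = 0.63                         # of returns, no fault found
cost_per_device = 35.0                   # GBP admin/shipping/refurb cost

nff_returns = uk_handsets_sold_per_year * return_rate * nff_share
uk_cost = nff_returns * cost_per_device
print(f"{nff_returns:,.0f} NFF returns -> GBP {uk_cost:,.0f} per year")
# With ~17m handsets sold this lands close to the GBP54m quoted above.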
So why are so many devices returned without fault? WDSGlobal is working alongside a leading UK mobile retailer to implement mechanisms and services to significantly reduce the impact of the NFF phenomenon. Analysis of over 15,000 monthly calls arriving at the specialist retail returns/diagnostics line provides a valuable insight into the causes behind the trend.
In some of these instances (24 per cent) the user had resorted to abandoning the device after a lengthy and frustrating battle with the usability of functions or applications. This provides a clear indication that manufacturers still need to invest considerable time and effort into the user-centred design and modelling of device software.
A recent study in the Netherlands reported a 20-minute average time in which the user will attempt to use a service before abandoning it. Alarm bells should be ringing for the industry when the manual set-up of an e-mail service on a device alone takes a minimum of 20 minutes, even before the user attempts to understand how the program works.
Some 8 per cent of users were simply attempting to return a device on the basis that it did not fulfil the purpose for which it was sold. This may be a consequence of inadequate marketing on behalf of the manufacturer, but is more often than not attributed to a knowledge deficit at the point of sale. The majority of mobile retailers are not equipped with the expertise to provide informed advice on the more complex features of mobile devices. Many high-end mobile phones are now differentiated through data communications technologies including GPRS, EDGE, UMTS and Bluetooth, which are complemented by an equally confusing array of applications such as WAP, MMS, e-mail and streaming media.
 If the mobile retailer is anything short of fully briefed on the benefits and application of these technologies they are highly likely to be furnishing the customer with inaccurate or insubstantial advice. A recent mystery shopper survey** carried out by WDSGlobal identified that only 20 per cent of retail staff could provide a moderate description of what 'BlackBerry' functionality could provide within a device, despite its prevalence as a powerful differentiating business function.
For some retailers, the provision of inappropriately positioned devices to customers is also a reflection of store policies to ship specific models based on margins, stock levels or promotions rather than matching requirements to solutions. The same retail survey alarmingly identified that only 60 per cent of leading high street retailers adopted an impartial customer focussed approach to sales, based upon listening to the requirements of the customer.
It is little wonder that angry and frustrated customers try to return devices as 'faulty' when they were ill advised at the point of sale. The more astute retailers are already embracing in-store kiosks and other knowledge based point of sale platforms in an effort to prevent this happening.
The most significant contributor to the 'No Fault Found' problem derives from users who – quite understandably – diagnose lack of connectivity to services such as WAP or e-mail as a fault. The reality however is that many devices are purchased in an un-configured state for use with these services. While many operators attempt to set up services for immediate 'out of the box' usage the more popular applications such as e-mail will be left to the 'DIY' devices of the user.
Subscribers who swap networks while maintaining their equipment, or those who purchase imported, second hand or SIM-free equipment will comprise the growing number of users who will be in an 'unconfigured' state for all services.
Out of the 300,000 calls received by WDSGlobal into a specialist tier-two support environment in Q2 of this year, 47 per cent of the issues faced related to problems associated with mobile service configuration. This in itself is a major concern, even more so when it represents a 2 per cent increase on the same statistic drawn in 2000. The problem is not going away and, as device and service complexity continues to develop at an unprecedented rate, the user experience only stands to worsen.

The true cost of the problem
Working within the industry it is often hard to empathise with the pain experienced by mobile subscribers. Assumptions that users will embrace complex device configuration menus or self-serve portals often fail to recognise that most users simply expect devices to work without understanding the underlying complexity. This is not difficult to understand when all other consumer electronic devices including MP3 players, portable gaming units and digital cameras simply work when they are taken from their boxes.
A user spending GBP500 on a sophisticated mobile device is doing so on the understanding that it will improve their personal productivity or simplify part of their lifestyle. However when this benefit comes at the expense of time spent elsewhere engaging with customer service agents or ploughing through convoluted instructional guides, the exercise becomes counterproductive.
The result is that either the device is abandoned (returned without fault) or the service itself is abandoned, relegating a potentially powerful communications tool to the status of an expensive personal organiser; neither situation is healthy for the consumer or the industry.
The $4.5 billion quantitative cost of this problem is easily recognised, and for mobile operators and manufacturers can be easily absorbed into the growing overhead associated with launching new products, or in many cases simply hidden within an inflated consumer price tag.
What is of greater concern is the less tangible qualitative issues at stake. Mobile subscribers are becoming increasingly despondent with mobile technologies, and a frustrating user experience has sadly become the rule and not the exception. Brand loyalty and subscriber churn once again come under fire as mobile users migrate between device vendor and mobile operator brands in an ostensibly eternal quest for an optimal user experience.
The mobile phone has become the poor relation of its consumer electronics cousins, and while many parallel sectors receive recognition and accolade for innovation and design, the mobile industry continues to draw bad publicity.
In an age of rapid innovation, where industry prophets foretell entirely converged consumer electronic devices in the near future, it is ironic that the mobile phone appears to be at the hub of the convergence. If mobile manufacturers and operators truly wish to form the vanguard of convergence innovation then there is still much to learn from their consumer electronic counterparts.

Mitigating the 'No Fault Found' risk
The NFF problem is not going to be solved overnight. It is a problem that has developed and festered over many years in conjunction with rapid industry growth and technology innovation.
The root causes however are not shrouded in mystery – they can be catalogued and analysed systematically with a view to preventing them in future product launches. Most operator support centres will carry detailed call records, explicitly capturing the frustrations faced by subscribers at the coal-face; similarly most reverse logistics organisations or departments will accurately log the inherent drivers behind NFF returns.
It is this data that provides the market intelligence for mobile industry Product Managers to mitigate problems with the launch of future products. Every problem can be traced back to a deficiency in the device design or the channels, processes and mechanisms that surround its launch and in-life support. Most of these issues may be addressed through stringent device testing and usability modelling prior to the launch of the mobile device. Furthermore, the effective empowerment of sales channels and support centres with specialist knowledge will help to alleviate much of the NFF problem.
It is a simple and logical process which, if pursued on an ongoing basis, should realistically show a reversal in the trends of high-volume customer support calls and 'No Fault Found' returns inside two years.
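To make that cataloguing step concrete, here is a minimal sketch (Python; the record fields and category names are assumptions for illustration, not WDSGlobal's actual schema) of how a returns desk might tally root causes from its call logs to feed back into product planning:

from collections import Counter

# Illustrative support-call records; fields and categories are assumed
# for the sketch, not drawn from any real returns database.
calls = [
    {"device": "model-A", "root_cause": "unconfigured_email"},
    {"device": "model-A", "root_cause": "usability_abandoned"},
    {"device": "model-B", "root_cause": "unconfigured_email"},
    {"device": "model-B", "root_cause": "mis-sold_at_point_of_sale"},
    {"device": "model-A", "root_cause": "unconfigured_email"},
]

def root_cause_report(records):
    """Count NFF root causes so product managers can see which issues
    to design out of the next launch."""
    return Counter(r["root_cause"] for r in records).most_common()

for cause, count in root_cause_report(calls):
    print(f"{cause}: {count}")
# unconfigured_email: 3, usability_abandoned: 1, mis-sold_at_point_of_sale: 1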
 
Doug Overton is Head of Communications at WDSGlobal   www.wdsglobal.com

Malcolm Dowden charts developments in the regulatory framework designed to encompass the European communications market

The electronic communications sector has come under increasingly close scrutiny from the European Commission, and 2007 looks set to be a particularly active year for the industry's representatives in Brussels and beyond. The Commission has stressed that it is ready to revisit any aspect of the Regulatory Framework, provided that this contributes to the attainment of the Lisbon 2010 Agenda objective of making the EU the most dynamic and successful knowledge-based economy by the end of this decade.
The current Regulatory Framework dates back only to 2003, and its transposition into national law was only completed in 2006. Indeed, some of the member states have not yet adopted the necessary secondary legislation or completed their market reviews. Nevertheless, a review has been initiated to ensure that the legislative and regulatory regimes can take account both of the rapid pace of technological developments and of the competitive landscape of the industry in the EU's range of established and emerging markets.
The Commission is examining all of the key directives on which the Regulatory Framework is based, together with Article 8 of the Electronic Communications Competition Directive. The Commission's principal objective is to remove any obstacles to the provision of faster, more innovative and competitive services. Further, the Commission has made it clear that the exercise will extend to the regulation of next generation networks and the liberalisation of radio spectrum.

Extended powers
In one of the main 'on-off' stories of 2006, the Commission has also been considering whether its powers under the Regulatory Framework should be extended to create a single regulator for the EU's electronic communications sector and also to include an ability to veto remedies imposed by national regulatory authorities. The idea of a single regulator was floated in June 2006 by Viviane Reding, the Information Society Commissioner. If introduced, it would resemble the European System of Central Banks in structure, with local regulators responsible for analysing local market conditions and reporting back to the EU's regulator to ensure that European law is applied equally across the continent. Introducing the concept, the Commissioner observed that this lack of harmony gives some countries an advantage over others, which is “unacceptable” and “an obstacle to the internal market to effective competition”.
The Commission intends to table draft legislative proposals amending the Regulatory Framework before the end of 2006 or early 2007. These proposals will then be transmitted to the European Parliament and Council for adoption under the co-decision procedure. The Commission's target is implementation and transposition into national laws by 2009-2010.
In parallel, a revision of the Recommendation on relevant product and service markets within the electronic communications sector is underway. The Recommendation lists a number of wholesale and retail markets susceptible to ex ante regulation by the member states. The Commission is proposing to reduce the number of markets from 18 to 12. The only remaining retail market covered by the Recommendation would be access to the public telephone network at a fixed location. 
There is also a root-and-branch review of the relationship between regulation and the application of competition law. In its 2005 Discussion Paper on the reform of Article 82 the Commission laid down some important policy markers – and in particular the view that competition law enforcement should be effects-based and focus on protecting consumer welfare. This would represent a significant shift in approach, with enforcement action no longer depending on the form a business practice takes, but on its effects.

Competition problems
What effects? Introducing the consultation in 2005, the Competition Commissioner Neelie Kroes made it clear that Article 82 enforcement should focus on real and empirically demonstrable competition problems. In other words, “behaviour that has actual or likely restrictive effects on the market, which harm consumers.”
In the telecoms sector, and particularly in the case of emerging markets, it will be very important to clarify this issue. Incumbent operators often argue that they should be granted a 'regulatory holiday' when they plan to upgrade bottleneck access infrastructure (e.g. from narrowband to broadband). However, as the infrastructure is not readily replicable (due to economies of scale and scope and legacy infrastructure), there is a risk that such holidays might result in retail markets being foreclosed to competition – this did in fact occur in the provision of broadband via ADSL technology in some Member States.
In its response to the Discussion Paper the European Competitive Telecommunications Association (ECTA) urged the Commission to recognise the relationship between competition law and sectoral regulation and follow up with a more in-depth analysis in sector-specific documents e.g. through an update of (i) the Notice on the application of competition rules for the telecoms sector; and (ii) the Commission's guidelines on ex-ante market definition and assessment of significant market power.
ECTA also called upon the Commission to highlight how rules can most appropriately be applied in sectors characterised by economies of scale, vertical integration, historic monopolies and former state funding. This is particularly relevant to guidance on 'emerging markets', 'leverage', 'efficiencies' and 'refusal to supply'.
Critically, ECTA asked the Commission to clarify what is meant by “capability to foreclose competition”. In the telecoms sector, where behaviours such as margin squeeze can cause considerable and lasting damage, it is important that a case can be brought before “actual foreclosure” has occurred. It would be little comfort to a market entrant that its failure might subsequently be used as evidence of anti-competitive behaviour on the part of an incumbent. 
There are indications that Commissioner Kroes is considering a sector-wide inquiry in telecoms in 2007. Having recently set about investigating the energy and financial services sectors, the Commission is thought likely to examine competition and the state of liberalisation of the telecoms sector as early as next year.
Sector inquiries typically begin with extensive questionnaires being sent by the Commission to industry players. They are organised and carried out by DG Competition in conjunction with the other relevant services of the European Commission. For telecoms, that would be the services of Viviane Reding, the Commissioner for Information Society & Media.
There is no doubt that 2007 looks set to be an interesting and critically important year for the electronic communications sector.                                       

Malcolm Dowden is an Associate with Charles Russell LLP, and can be contacted via e-mail: Malcolm.dowden@charlesrussell.co.uk

Projects fail - some spectacularly - and, looking at project management statistics, if you were a betting man you wouldn’t lay money on their successful delivery. But is failure inevitable? Are project managers doomed to spend their careers in a never-ending groundhog day of disaster? Brendan Loughrey looks at the best ingredients to prevent the bitter taste of failure

Project failure is not discriminatory – it pretty much affects all sectors and all countries in equal measure. It might be an embarrassingly late delivery by a Department of Defence that has spent millions on helicopters that aren't safe to fly, or a contractor featured in a recent scathing report that had only completed six out of the 150 health centres it was supposed to be building in Iraq, despite having already spent 75 per cent of the USD186 million budget.
Louise Cottam, a senior project manager who works in the food, nutritionals, pharmaceuticals and medical devices industries, comments: “I've seen the same failure modes across all the industries I've worked in – in fact some of the worst projects I've experienced were within consulting or project management divisions. Although the goals and products might be different, failed projects share many common characteristics”.
And it's not like we don't know why projects fail. After all, hundreds of thousands of words have been written to document and analyse the causes of project failure. There are also statistics galore, showing just how very bad some are at delivering projects on time and to budget; while many, many more words have been written to describe features of successful projects and to promote a variety of methodologies that are claimed to improve success rates. Yet still the Holy Grail of successful project delivery seems to slip from our grasp.
Neither is this an inconsequential issue. The consequences of project failure can be catastrophic – failed projects bankrupt businesses, generate humiliating publicity which devastates investor confidence and slashes the stock price, help to bring down governments and soak up huge sums of money that could be better spent elsewhere. In the worst cases they kill people.

Grim...and getting worse
IT and telecoms industry stats are particularly grim and are expected to become worse. The daddy of all IT project failure studies is the Chaos Report by the Standish Group. It's Standish that is behind the often-quoted figure of a 70 per cent failure rate for IT projects, which is even more worrying when you consider that its definition of failure is a 100 per cent overrun on time or budget. Scarily, in the ten years since the initial study, little seems to have changed. Indeed, some industry observers comment that failure rates may even have got worse. Standish stats show that 31 per cent of projects are cancelled before completion and 88 per cent exceed deadline, budget or both, while average cost overruns are 189 per cent and average schedule overruns are 222 per cent.
With global spending on software, hardware and services to support tomorrow's networked, computerised world running into trillions of dollars, the financial losses from overruns and failed projects are staggering. When you consider that these projects have knock-on financial effects – such as lost business, lost stock value and the requirement to compensate customers – the numbers quickly become unimaginable. The problem is also growing rapidly with the number of IT megaprojects, as large governmental organisations automate processes and connect disparate systems. One example of a current megaproject is the UK's National Health Service project to automate health records, which it is estimated will cost at least USD65 billion.

Root cause
Not everything is as it first appears: critics of the Standish statistics and other surveys point to the fact that the root cause of overruns lies in poor scoping and requirements capture at the start of the project. Peter Bowen, a senior consultant in the telecoms industry, explains: “Many companies chronically underestimate the cost of running a project or how long realistically it will take to deliver. Scope creep adds to this problem, with companies failing to understand how much extra time and money adding that nice-to-have feature will cost them. Of the hundreds of projects I have reviewed, the most common problem is the failure to meet expectations. I've lost count of the times I've heard: 'this is not what I want' (customer); 'but this is what you asked for' (supplier); 'but this is not what I need'.”
In fact executive pressure to bring projects in as cheaply as possible often means there is little incentive to identify the true cost or realistic timescales at the start. Doing so would mean that many projects would simply never get off the ground.
So is project failure an inevitable fact of life? Well, it isn't at Comunica. Our business is based on our ability to deliver and we have a 0 per cent failure rate. Quick double take. Did he say 0 per cent? Surely not? At Comunica we're proud that we've never let a client down on a date, and we offer a fixed price upfront, so we never overrun the budget. If we go over, it's our problem, not the client's. This is one of our differentiators: few companies are brave enough to offer fixed prices and our willingness to do so demonstrates our confidence in our ability to deliver.

Ultimately it's about people
Why are we so successful at delivering projects? Well, ultimately it's all about people. The technology we implement is pretty well tested, with absolutely minimal failure rates. The key to our success lies with human issues such as good communication, realism and truthfulness, and creating trust with employees, with other project staff and with the customer. Our differentiator is the quality of our people.
Formal methods are useful, but they simply do not work out-of-the-box in the real world. You have to be prepared to adapt to the environment and culture, taking what you need from methodologies. You also have to be prepared to review and learn from your mistakes – constantly evolving your approach to make it more effective. Getting the approach right is also important. While there has to be accountability in projects, a finger-wagging blame culture is counterproductive. Things go wrong and people make mistakes. Success comes from being fair but firm, and earning trust so that people will tell you there is a problem. If your staff trust you enough to tell you they're worried about something, then that gives you the chance to fix it. Blaming is easy: being solution-oriented and constructive is much, much harder.

Brendan Loughrey is Comunica Limited's Project Director   www.comunica.co.uk
