Archive

Posts Tagged ‘Telecom’

Infrastructure Edge: Awaiting Development

August 24, 2018

Editor’s Note: This article originally appeared on the State of the Edge blog. State of the Edge is a collaborative research and educational organization focused on edge computing. They are creators of the State of the Edge report (download for free) and the Open Glossary of Edge Computing (now an official project of The Linux Foundation).

When I began looking into edge computing just over 24 months ago, weeks would go by with hardly a whimper on the topic, apart from sporadic briefs about local on-premises deployments. Back then, there was no State of the Edge Report and certainly no Open Glossary of Edge Computing. Today, an hour barely passes before my RSS feed buzzes with the “next big announcement” around edge. Edge computing has clearly arrived. When Gartner releases their 2018 Gartner Hype Cycle later this month, I expect edge computing to be at the steepest point in the hype cycle.

Coming from a mobile operator heritage, I have developed a unique perspective on edge computing, and would like to double-click on one particular aspect of this phenomenon: the infrastructure edge and its implications for the broader ecosystem.

The Centralized Data Center and the Wireless Edge

 

So many of today's discussions about edge computing ascribe magical qualities to the cloud, suggesting that it's amorphous, ubiquitous and everywhere. But this is a misconception. Ninety percent of what we think of as cloud is concentrated in a small handful of centralized data centers, often thousands of miles and dozens of network hops away. When experts talk about connecting edge devices to the cloud, it's common to oversimplify and emphasize the two endpoints: the device edge and the centralized data center, skipping over the critical infrastructure that connects these two extremes, namely the cell towers, RF radios, routers, interconnection points, network hops, fiber backbones, and other critical communications systems that liaise between edge devices and the central cloud.

In the wireless world, this is not a single point; rather, it is distributed among the cell towers, DAS hubs, central offices and fiber routes that make up the infrastructure side of the last mile. This is the wireless edge, with assets currently owned and/or operated by network operators and, in some cases, tower companies.

The Edge Computing Land Grab

 

The wireless edge will play a profound and essential role in connecting devices to the cloud. Let me use an analogy of a coastline to illustrate my point.

Imagine a coastline stretching from the ocean to the hills. The intertidal zone, where the waves lap upon the shore, is like the device edge: full of exciting activity and a robust ecosystem, but too ephemeral and subject to change for building a permanent structure. Many large players, including Microsoft, Google, Amazon, and Apple, are vying to win this prized spot closest to the water's edge (and the end user) with on-premises gateways and devices. This is the domain of AWS Greengrass and Microsoft IoT Edge. It's also the battleground for consumers, with products like Alexa, Android, and iOS devices. In this area of the beach, the battle is primarily between the internet giants.

On the other side of the coastline, opposite the water, you have the ridgeline and cliffs, from where you have an eagle-eye view of the entire surroundings. This "inland" side of the coastline is the domain of regional data centers, such as those owned by Equinix and Digital Realty. These data centers provide an important aggregation point for connecting back to the centralized cloud and, in fact, most of the major cloud providers have equipment in these co-location facilities.

And in the middle — yes, on the beach itself — lies the infrastructure edge, possibly the ideal location for a beachfront property. This space is ripe for development. It has never been extensively monetized, yet one would be foolhardy to believe that it has no value.

In the past, the wireless operators who act as caretakers of this premier beachfront space haven't been successful in building platforms that developers want to use. Developers have always desired global reach along with a unified, developer-friendly experience, both of which are offered by the large cloud providers. Operators, in contrast, have largely failed on both fronts: they are primarily national, maybe regional, but not global, and their area of expertise is in complex architectures rather than ease of use.

This does not imply that the operators are sitting idle here. On the contrary, every major wireless operator is actively re-engineering its network to roll out Network Function Virtualization (NFV) and Software Defined Networking (SDN) along the path to 5G. These software-driven network enhancements will demand large amounts of compute capacity at the edge, which will often mean micro data centers at the base of cell towers or in local antenna hubs. However, these are primarily inward-looking use cases, driven more from a cost-optimization standpoint than a revenue-generating one. In our beach example, it is more akin to building a hotel call center on the beachfront than opening the property up to guests. It may satisfy your internal needs, but it does not generate top-line growth.

Developing the Beachfront

 

Operators are not oblivious to the opportunities that may emerge from integrating edge computing into their networks; however, there is a great lack of clarity about how to go about doing this. While powerful standards are emerging from the telco world, Multi-access Edge Computing (MEC), which provides API access to the RAN, being one of the most notable, there is still no obvious mechanism for stitching these together into a global platform, one that offers a developer-centric user experience.

All is not lost for the operator; there are a few firms, such as Vapor IO and MobiledgeX, that have close ties to the infrastructure and operator communities and are tackling the problems of deploying shared compute infrastructure and building a global platform for developers, respectively. Success is predicated on operators joining forces, rather than going it alone or adopting divergent and incompatible approaches.

In the end, just like a developed shoreline caters to the needs of visitors and vacationers, every part of the edge ecosystem will rightly focus on attracting today's developer with tools and amenities that provide universal reach and ease of use. Operators have a lot to lose by not making the right bets on programmable infrastructure at the edge that developers clamor to use. Hesitate, and they may very well find themselves eroded and sidelined by other players, including the major cloud providers, in what is shaping up to be one of the more exciting evolutions to come out of the cloud and edge computing space.

The Un-network Uncarrier

August 1, 2017

Note: This article is the first of a 3-part series on Facebook's Telecom Infra Project (TIP) initiative and the underlying motivation behind it. The next two parts focus on the options and approaches that operators and vendors, respectively, should adopt in order not to suffer the same outcome as with OCP.

When T-Mobile USA announced its latest results on July 19, 2017, CEO John Legere remarked with delight: "we have spent the past 4.5 years breaking industry rules and dashing the hopes and dreams of our competitors…". The statement rang true: in 2013 the stock hovered at a paltry USD 17.5 and it is now close to USD 70, with 17 straight quarters of adding more than 1 million customers per quarter. If anything, the "Un-carrier" has so far proven unstoppable. While this track record is remarkable, my hypothesis is that in the long term the "un-carrier" impact may seem puny compared to the "un-network" impact, which could potentially upend the entire telecom industry, operators and vendors alike. If you are still wondering what I am referring to, it is TIP, the open Telecom Infra Project started by Facebook in 2016, which now counts more than 300 members.


The motivation for TIP came on the heels of the wildly successful Open Compute Project (OCP). OCP, whose members include Apple, Google and Microsoft, heralded a new way to "white-box" a data center based upon freely available specifications and a set of contract vendors ready to build to preset needs. It also established a new paradigm (now also seen in other industries): you don't need to build and own it… to disrupt it (think Uber and Airbnb). And while many may be falsely led to believe that this is primarily for developing countries, given Facebook's goal to connect the "un-connected", I believe the impact will be felt by operators in developed and developing countries alike. There are a few underlying reasons here.

  • A key ingredient in building a world-class product is a razor-sharp focus on understanding (and potentially forecasting) your customer/user needs. While operators do some of that, Facebook's approach raises the bar to an altogether different level. And in this analysis, one thing is obvious (something operators are also painfully aware of): people have a nearly insatiable appetite for data. A majority of this data is in the form of video; unsurprisingly, video has become a major focus across all Facebook products, be it Facebook Live, Messenger video calling or Instagram. The major difference between the developing and developed world in this respect lies in the data rates people expect, and in the willingness (and in many cases the ability) to pay for them.
  • In developed nations, 5G is not a pipe dream anymore, and it has operators worried about the high costs associated with rolling out and maintaining such a network (not to forget the billions spent on license fees). In order to recoup this, there may be an impulse to increase prices, which could swing several ways: people grudgingly pay; consumers revolt, resulting in margin pressure; or, lastly, people simply use less data. The last option should worry any player in the video game: what if people elect to skip the video to save costs?
  • In the developing world, 5G may still be some way off, but here in the land of single-digit ARPUs, operators have limited incentive for heavy investment given a marginal (or potentially negative) ROI. On top of that there is a lack of available talent and a heavy dependence on vendors, whose primary revenue comes from selling even more equipment and services, all of which increases the end price to the consumer.

Facebook has its fingers in both these (advanced and developing market) pies, and while its customers may be advertisers, its strength comes from the network effect created by its billion-strong user base, so it is essential to provide these users with the best experience possible.


These users want great experiences, higher engagement and better interaction. Facebook also knows another relevant data point: that building, owning and operating a network is hard work, asset-heavy and not as profitable. It need look no further than Mountain View, where Google has been scaling down its Fiber business. Truck rolls are, frankly, not sexy.

It is not that operators were unaware of this issue, but they traditionally lacked the know-how needed to tackle a challenge that required deep hardware expertise. This changed with the arrival of the "softwarization" of hardware in the form of network function virtualization. Operators were not software gurus… but Facebook was. It recognized that by using this paradigm shift and leveraging its core software competencies it could transform and potentially disrupt this market.

Operators could leverage the benefits of open-source design to vastly drive down the costs of implementing their networks; vendors would no longer have the upper hand, and the current paradigm of bundling software, hardware and services would be disrupted. Implementations could potentially result in a better user experience, with Facebook as one of the biggest beneficiaries. Rather than spend billions connecting the world, it would support others to do so. In doing so, it would have access to infrastructure that it helped architect without owning any of it. In short, it would be the "un-network uncarrier".

AI – rescuing the spectrum crunch (Part 1)

April 4, 2016

Chamath Palihapitiya, the straight-talking boss of Social Capital, recently sat down with Vanity Fair for an interview in which he described what his firm looks for when investing: "We try to find businesses that are technologically ambitious, that are difficult, that will require tremendous intellectual horsepower, but can basically solve these huge human needs in ways that advance humanity forward".

Around the same time, and totally unrelated to Chamath and Vanity Fair, DARPA, the much-vaunted US agency credited, among other things, with setting up the precursor to the Internet as we know it, threw down a gauntlet at the International Wireless Communications Expo in Las Vegas. What was it? A grand challenge: 'The Spectrum Collaboration Challenge'. As the webpage summarized it, the challenge "is a competition to develop radios with advanced machine-learning capabilities that can collectively develop strategies that optimize use of the wireless spectrum in ways not possible with today's intrinsically inefficient static allocation approaches".

Why would this be 'Grand'? Simply because DARPA had accurately pointed out one of the greatest challenges facing mobile telephony: the lack of available "good" spectrum. In doing so, it also indirectly recognized the indispensable role that communications plays in today's society, and the fact that continuing down the same path as before may simply not be tenable 10, 20 or 30 years from now, when demand for spectrum and capacity simply outstrips what we have right now.

Such Grand Challenges are not to be treated lightly; they set the course for ambitious endeavors, tackling hard problems with potentially global ramifications. If you wonder how fast autonomous cars have evolved, it is in no small measure due to programs such as these, which fund and accelerate development in these areas.

Now you may ask why? Why is this relevant to me and why is this such a big deal? The answer emerges from a few basic principles, some of which are governed by the immutable laws of physics.

  • Limited "good" spectrum – the basis on which all mobile communication exists is a finite quantity. While the electromagnetic spectrum itself stretches far beyond these bands, the "good spectrum" (i.e. between 600 MHz and 3.5 GHz), the part that all mobile telephones use, is limited and, well, presently occupied. You can transmit above that (5 GHz and above, and yes, folks are considering and doing just that for 5G), but then you need a lot of base stations close to each other (which increases cost and complexity), and if you transmit well below that (i.e. 300 MHz and below) the antennas typically become quite big and unwieldy (remember CB radio antennas?). A quick back-of-the-envelope sketch after this list shows how antenna size scales with frequency.
Figure: Spectrum - Sweet Spot (courtesy: wi360.blogspot.com)

 

  • Increasing demand – if there is one thing regulators, operators and internet players all agree upon, it is this: we humans seem to have an insatiable demand for data. Give us better and cheaper devices and cool services such as Netflix at a competitive price point and we will swallow it all up! And if you think humans were bad, there is also projected growth of up to 50 billion connected devices in the next 10 years, all of them communicating with each other, with humans and with control points. These devices may not require a lot of bandwidth individually, but they sure can chew up a lot of capacity.
Source: Cisco VNI

  • And, as a consequence, increasing license prices driven by scarcity. While the 700 MHz spectrum auction in 2008 enriched the US Government's coffers by USD 19 billion (YES – BILLION), the AWS-3 auction (in the less desirable 1.7/2.1 GHz band) netted a mind-boggling USD 45 billion.
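As flagged above, here is a minimal, purely illustrative sketch (my own numbers and code, not anything from DARPA or an operator) of one half of the 'sweet spot' argument: antenna dimensions scale with wavelength, with a quarter-wave element being a common rule of thumb, so the lower you go in frequency, the larger the antenna. The penalty at the high end, namely shorter range and denser base station grids, is not captured here.

```python
# Back-of-the-envelope: quarter-wave antenna size vs. carrier frequency.
# Purely illustrative; real antenna designs vary widely.

C = 299_792_458  # speed of light in m/s

def quarter_wave_cm(freq_hz: float) -> float:
    """Approximate quarter-wavelength element size in centimetres."""
    wavelength_m = C / freq_hz
    return wavelength_m / 4 * 100

for label, freq_hz in [
    ("CB radio (~27 MHz)", 27e6),
    ("Low band (300 MHz)", 300e6),
    ("Sweet spot (600 MHz)", 600e6),
    ("Sweet spot (3.5 GHz)", 3.5e9),
    ("mmWave (28 GHz)", 28e9),
]:
    print(f"{label:>20}: ~{quarter_wave_cm(freq_hz):8.1f} cm")
```

Running this gives roughly 2.8 m at 27 MHz, about 12 cm at 600 MHz, and a few millimetres at 28 GHz, which is why the 600 MHz – 3.5 GHz band is such contested territory.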

One key element that keeps driving up the cost of spectrum is that the business model of all operators is based around a setup which has remained pretty much the same since the dawn of the mobile era. It follows a fairly linear approach:

  • Secure a spectrum license for a particular period of time (sometimes linked to a particular technology) along with a license to provide specific services
  • Build a network to work in this spectrum band
  • Offer voice, data and other services (either self-built or via third parties) to customers

While this system worked in the earlier days of voice telephony, it has now started fraying around the edges.

  • Regulators want consumers to have access to services at a reasonable price, and expect a competitive market environment to ensure this. However, with spectrum scarcity looming, prices for spectrum are surging – prices which are directly or indirectly passed on to the customer
  • If regulators hand spectrum out evenly, it may level the playing field for the operators, but it does nothing to address a customer need: the capacity offered by any one operator may not be sufficient, leaving everyone wanting more rather than a few being satisfied
  • Finally, spectrum in many places around the world remains inefficiently used. There are regions where rich firms hoard spectrum as a defensive strategy to depress competition. In other environments, one operator sits on a lot of unused capacity while another operates beyond peak, with poor customer experience as the result. No wonder previous generations of networks were designed to sustain near-peak loads, increasing the CAPEX/OPEX required to build and run them. A toy illustration of how much capacity such static allocation wastes follows this list.
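To make the inefficiency point concrete, here is a toy model of my own (the load numbers are entirely made up, and this is not any real allocation algorithm): under static licensing each operator must hold enough spectrum for its own peak, whereas a dynamically shared pool only has to cover the combined peak.

```python
# Toy model: why static spectrum allocation wastes capacity.
# Two operators with complementary load over a day (entirely made-up numbers).

op_a = [2, 2, 3, 8, 9, 8, 4, 3]  # operator A's load per time slot, in "spectrum units"
op_b = [7, 8, 6, 3, 2, 3, 7, 8]  # operator B's load per time slot

# Static licensing: each operator must hold enough spectrum for its own peak.
static_need = max(op_a) + max(op_b)

# Dynamic sharing: a common pool only needs to cover the combined peak.
pooled_need = max(a + b for a, b in zip(op_a, op_b))

print(f"Static allocation needs: {static_need} units")   # 9 + 8 = 17
print(f"Shared pool needs:       {pooled_need} units")   # peak of the summed load = 11
print(f"Saving: {100 * (1 - pooled_need / static_need):.0f}%")
```

With these made-up figures, static allocation needs 17 spectrum units against 11 for the pooled case, a saving of roughly a third; the promise of an AI-enabled approach, to be explored in the next part, is to capture gains of this kind in real time across many operators and bands.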

In the next part of this article we will dive deeper into these issues, trying to understand how an AI-enabled dynamic spectrum environment might work, and in the final part we will point out what it could mean for the operator community and internet players at large.

Google Fi… a case of self-service and identity

April 24, 2015

The past couple of days have been quite interesting, with the launch of Google's MVNO service. Although I still stick to my earlier stance, there are a few interesting nuggets worth examining. I will not go into the details of the service, which you can find all over the internet – such as here – but will instead look at a couple of avenues that are much less talked about and which I feel are very relevant.

The first is the idea of a carrier with complete self-service. Now, self-service is not rocket science, but when you look at legacy firms (and yes, I mean regular telcos) this accounts for a LOT. In the current ecosystem of cost cutting, one of the largest areas where you can reduce costs is head-count. To give you an idea of what this means for a US telco, one only needs to read the news around the consolidation of call centers at both T-Mobile and Verizon. We are not talking of 100 - 200 people here, we are literally talking about thousands of people! And people cost money – and training – and, well, can even indulge in illegal activities like selling your social security details! With a very well-designed self-service offering you could do away with most (I dare say even all) of them and voila – you have a lean service which meets a majority of your customer needs. In this case, although you end up buying wholesale minutes and data from operators, you have none of the hassles of operating a network, and certainly none of the expensive overhead such as a call center. No wonder you can match the competition on price – and perhaps do so at an attractive margin! I am not sure if Google already has this in place, but I believe it does have the computing and software wizardry to accomplish it.

The next is the concept of a phone number as your identity. This has been sacrosanct to operators till now – and Google has smartly managed to wiggle its way in. Now it really doesn't make a difference what device you use – all your calls will be routed via IP – so the number is no longer "tied" to your device. Maybe in the future your number would no longer be relevant; maybe it would only be an IP address. What would be important is your identity. Simply speaking, perhaps you could go to any device, log in with your credentials, and see the home screen as it is configured for you, irrespective of device.

However, here I find it difficult to understand Google's approach. For a unique identity they chose to go with Google Hangouts… and although I don't know the usage figures, I doubt this is a very "popular" platform to begin with. Is this Google's way of trying to push the adoption of Hangouts, especially since most of its efforts around social have proven to be a lot of smoke without fire? Will this approach limit its appeal to the few who are Hangouts supporters, is this the first salvo in a wider range of offerings, or will this be a one-trick pony to rocket Google onto the social map? Only time may tell.

But one thing is certain: such services will force operators to keep optimizing – using techniques such as self-service to reduce costs and offer even better solutions to their customers at better price points. Maybe, in a way, it is not unlike Google Fiber – it serves to push the market in the right direction rather than be a game changer on its own. That in itself would be worthy of applause.

Net Neutrality – let the “Blame Games” begin

Image: Netflix is shaming Verizon for its slow internet

The picture stared right back at me – it was hard to miss, with a bright red background and letters clearly spelling out "Netflix is slow, because the Verizon network is congested". It is hard for a few days to pass without yet another salvo being thrown in the latest war over net neutrality. No sooner does one side claim victory than the other party files a counter-claim. This is not only a US phenomenon, but has also spread to Europe; regulators now have their backs firmly against the wall as they figure out what to do.

A pet peeve of mine is that, although there are good, solid arguments on both sides, the majority of comments show a clear lack of understanding of the situation as it stands. I am all for an open and fair market, and an ardent supporter of innovation – but all this needs to be considered in the context of the infrastructure and costs needed to support this innovation.

A few facts are certainly in order:

  • People are consuming a whole lot more data these days. To give you an idea of what this means, here are some average figures for different data services (a rough monthly total is sketched after this list)
    • Downloading an average length music track – 4 MB
    • 40 hours of general web surfing – 0.3 GB
    • 200 emails – 0.8 GB, depending upon attachments
    • Online radio for 80 hours per month – 5.2 GB
    • Downloading one entire film – 2.1 GB
    • Watching HD films via Netflix – 2.8 GB per hour
  • Now consider the fact that more and more content is going the video route – and this video is increasingly ubiquitous and increasingly available in HD… this means
    • Total data consumption is fast rising AND
    • Older legacy networks are no longer capable or equipped to meet this challenge
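As promised, here is a rough monthly total for a single user built only from the per-service figures quoted above; the usage habits assumed (daily streaming hours, number of downloads and so on) are my own illustrative guesses, not measured data.

```python
# Rough monthly data estimate for one user, using the per-service figures quoted above.
# The usage assumptions (hours, counts) are purely illustrative guesses.

HD_STREAMING_GB_PER_HOUR = 2.8   # watching HD films via Netflix
FILM_DOWNLOAD_GB = 2.1           # downloading one entire film
WEB_SURFING_GB = 0.3             # 40 hours of general web surfing
ONLINE_RADIO_GB = 5.2            # online radio for 80 hours per month
MUSIC_TRACK_GB = 4 / 1024        # one average-length music track (4 MB)

monthly_gb = (
    1.5 * 30 * HD_STREAMING_GB_PER_HOUR   # 1.5 hours of HD streaming per day
    + 4 * FILM_DOWNLOAD_GB                # four film downloads
    + WEB_SURFING_GB                      # ~40 hours of general surfing
    + ONLINE_RADIO_GB                     # ~80 hours of online radio
    + 60 * MUSIC_TRACK_GB                 # 60 music tracks
)
print(f"~{monthly_gb:.0f} GB per month for a single, fairly ordinary user")
```

Even with these fairly ordinary habits the total lands around 140 GB per month, which is why networks and pricing plans built for a web-and-email era struggle once HD video becomes the default.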

Rather than dive into the elements of peering etc., for which there are numerous other well-written articles, I would like to ponder three aspects.

  1. These networks are very expensive to build out – truck-rolls to get fiber to your house are expensive… and
  2. Once built out – someone has to pay
  3. and finally – firms are under increasing pressure from shareholders and the like to increase revenues and profits…

This simply means that if the burden were all on the consumer, he would end up with a lighter (I dare say a much lighter) wallet. It would be similar to a utility: the more you consume, the more you pay. It would be goodbye to the era of unlimited data and bandwidth (even for fixed lines).

Now, I totally get the argument that when there is only one provider (a literal monopoly), the customer is left with no choice. This is definitely a case which needs some level of regulation to protect the customer from "monopoly extortion". Even when regulators promote "bit-stream access" – i.e. allowing other parties to use the infrastructure – there is a cost associated with it. Hence this is more of a pseudo-competition on elements such as branding, customer service etc. rather than on the infrastructure itself. The competitor may discount, but at the expense of margins; there always exists a lower price threshold agreed with the regulator. Other losers in such a game are consumers who live in far-flung areas – the economics of providing such connections eliminate them from any planning (unless forced by the government). In such a case it becomes a cross-subsidy, with the denser urban populace subsidizing the rural community. However, these areas can be served by dedicated high-speed wireless connections and in my opinion do not present such a pressing concern.

As everyone agrees, the availability of and access to high-speed data at a reasonable cost has a direct and clear impact on the overall economy of a country. If we do not want to keep increasing prices for consumers in order to keep investing in infrastructure upgrades, who then should shoulder this burden? Even though it would undoubtedly be unfair to burden small and emerging companies by throttling their services, given how skewed data traffic is, with a few providers (e.g. Netflix) generating the bulk of the traffic, would it be fair for them to bear a portion of this load? After all, these firms are making healthy profits while bearing none of the cost of the infrastructure.

These aren't easy questions to answer, but they need to be considered in the broader context. One extreme may lie in having national infrastructure networks – but this is easier said than done. A better compromise may be to get both sides to the negotiating table and involve the consumer as well, each recognizing that it is better to bury the hatchet and work out a reasonable plan than to pursue endless lawsuits.

Once this is accomplished – the Blame Games would end; and hopefully the “Innovation” celebrations would commence!

Partnering at large corporations – secrets to success!


In the era of consolidation following the economic crisis, many firms turned towards a 'protect the core' approach. One often-adopted measure was the slashing of R&D budgets and, consequently, a diminishing pipeline of new products. These now-lean companies are facing a new problem: what do you do when your core itself is shrinking? This is being fueled by rapid technology changes which allow substitutes to enter the market, many with a compelling service offering, and many times free of cost!

Given the options to either make (i.e. reinstate the development teams you just let go) or buy (who knows if this is hype or will actually work?), many large firms are turning to the third option: partnering. At the highest level this would seem simple – there are no integration headaches and no large in-house development costs – but many are realizing it is not as easy as it seems. This is best illustrated by a simple analogy: 'have you ever seen an elephant dance with a mouse?'. Nonetheless, rather than give up on what is, and should be, a clear cornerstone of new product offerings to keep customers happy, large companies can take some concrete steps in this direction.

  • Make sure that the partnering and venturing team has the full backing of the management. This simply boils down to the fact that partner products should always be evaluated on par with similar internally developed offerings – no step-motherly treatment here.
  • Ensure that the team understands the partner mindset. Ideally this would mean that a few members, if not all, have past experience in such an environment. It is quite difficult to understand the challenges, needs and wants of small firms if you haven't quite done it yourself.
  • Do not subject your partner to the internal functions, realpolitik and bureaucracy of your organization. Instead, work hard to make your own internal processes lean, efficient and less cumbersome. The best way to look at it: treat the partner as you would a customer; keep them happy! It is amazing how beneficial this can be to other divisions in your organization as well.
  • Invest in being able to effectively manage multiple partnerships. Managing the first 3 – 10 is easy; managing 50 is another story altogether. This does not imply additional layers of complexity, but recognize the need to stay abreast of and manage partner needs. The Microsoft way – categorizing partners based upon a few select KPIs (e.g. revenue, strategic interest) and then assigning resources to them accordingly – is quite effective in this respect. I have to give them credit here: they do know how to manage (without strangling) their partners to a good extent.
  • And with all partners – do not be afraid of failure; monitor the progress, recognize if and when it is failing – and move on. Do not throw good money after bad.
  • Finally – be patient. The above points may sound simple but they need some time to fall into place and for you to see tangible results. Give yourself the time, and make management aware of this. The last thing you need is the fruits of your labor falling apart halfway through because unrealistic short-term expectations have been set!

The disappearing SIM

October 4, 2012

If there is one piece of telecom real estate that the operator still has a stranglehold on, it is the end customer – and the little SIM card holds all of that customer's data. Interestingly, as smartphones continue to grow in size (some of them would definitely not fit in my pocket anymore), the SIM card appears to be following a different trajectory altogether. Apple again has been the undisputed leader in this game, with the introduction of first the micro-SIM and now the nano-SIM.


Reproduced from Wikimedia Commons

To be clear, this has not yet led to any dramatic impact upon the capabilities of the SIM. Advances in technology continue to ensure that more and more information can be squeezed into a smaller and smaller footprint. And although there have been claims that the main purpose was elegance of design and squeezing more into the limited space that exists, I do wonder if there is a long-term strategic motive behind this move.

Let us revisit the SIM once again; in the broader sense it is a repository of subscriber and network information on a chip. In the modern world of apps, cloud services and over-the-top updates, is there any real USP in having an actual piece of hardware embedded in the phone? This is not a revolutionary thought – in fact, Apple was rumored to have considered it a couple of years ago and was working towards a SIM-less phone. The concept is not new – some interesting notes about how this could look date back to 2009. From what I gathered, at that time a strong, united ruckus from the carriers dissuaded Apple from continuing down that path.

Two years have passed since then; operators continue to see margin pressure while the likes of Apple and Samsung remain the darlings of Wall Street, and, well, the SIM card keeps getting smaller. How does that impact the operator? For one, if operators continue to bank on the actual chip as their own property, this advantage may continue to slip away. This matters because, although the SIM can keep shrinking while still accommodating its traditional functions, once you add in other potential functionalities such as payment and security, at some point you need additional real estate to house them. And if at that point the SIM is simply too small, and a large number of handset manufacturers (whose devices customers want) design to this nano spec, then, well, the operator is out of options.

However, this 'creative destruction', as coined by Schumpeter, is perhaps the trigger that telecom operators need in the first place. The eSIM is perhaps not the end of the game for them, but the advent of a new start. This would be a world where software triumphs over hardware – where functionalities are developed and embedded across multiple operating systems and customized to each individual device's and user's capabilities. Here, rather than mass-producing SIMs and running processes to authorize and configure them, users could take any phone, get registered over the air, and have capabilities dynamically assigned and configured to suit their specific needs (a hypothetical sketch of such a flow follows below). In some ways it would be the era of mass customization, done efficiently, easily and seamlessly.
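Purely as a thought experiment, here is a minimal sketch of what such software-defined, over-the-air provisioning could look like. Every name in it (ProfileServer, SubscriberProfile, the method signatures) is my own invention for illustration; it does not reflect the GSMA eSIM specifications or any operator's actual API.

```python
# Hypothetical sketch of software-defined, over-the-air subscriber provisioning.
# All class names and flows are illustrative inventions, not a real eSIM/GSMA API.

from dataclasses import dataclass, field

@dataclass
class SubscriberProfile:
    identity: str                                     # a user identity replaces the physical SIM
    operator: str
    capabilities: set = field(default_factory=set)    # e.g. voice, data, payments, security

class ProfileServer:
    """Stand-in for an operator-side provisioning service."""

    def register(self, identity: str, device_id: str, operator: str) -> SubscriberProfile:
        # In a real system this step would authenticate the user and the device.
        profile = SubscriberProfile(identity=identity, operator=operator,
                                    capabilities={"voice", "data"})
        print(f"Provisioned {identity} on {operator} for device {device_id}")
        return profile

    def add_capability(self, profile: SubscriberProfile, capability: str) -> None:
        # New functionality (payments, security, ...) is a software update, not new plastic.
        profile.capabilities.add(capability)

# Usage: any phone, registered over the air, capabilities assigned dynamically.
server = ProfileServer()
me = server.register("alice@example.com", device_id="DEVICE-0001", operator="OperatorX")
server.add_capability(me, "payments")
print(sorted(me.capabilities))
```

The point of the sketch is simply that once the subscriber record lives in software, issuing it, moving it between devices and extending it with payments or security features becomes an API call rather than a new piece of plastic.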

It would definitely be a different ball game, and I can only speculate which way the ball will spin – but perhaps it would be in operators' own interest to willingly embrace such a transformation rather than push back; the opportunities might simply outweigh the risks.