Smart IoT London

15 - 16 March 2017, ExCeL London

Co-Located With:

  • Big Data World
  • Cloud Security Expo
  • Cloud Expo Europe
  • Data Centre World

THINGS, IN ACTION

Wherever you are on your IoT journey, Smart IoT London is the only place to be

Industry News

The Stack

Constellation Research

  • By Andy Mulholland

    Any description of the Digital Services Economy will focus on the development of marketplaces through hyper-connectivity and massive data flows. The enterprises that emerge as competitive winners have learned how to read new business opportunities and respond to them in an optimized manner. The digital enterprise business model demands new levels of integration across enterprise activities: external agility in responsiveness requires deep insight into internal operational performance in near real time, together with the ability to orchestrate the optimum response. Cloud-based services and apps are a reality, IoT sensing is gaining ground, and now AI is arriving; deployed together in an integrated solution, these technologies transform enterprise capability.

    Whether the context is human, machine or computer, increasing the richness of any network has always led to improvements in business outcomes and operational efficiency. From 2000 onwards the Internet has driven the transformation of business, starting with the web and moving on to the use of clouds and apps. Now the Internet of Things, by providing the digital rendering of the world that AI requires, is connecting these changes together.

    It was the same 25 years ago, when the individual technologies of the PC, Ethernet, client-server and even email combined to form ERP enterprise applications that transformed the basis of business competitiveness. It is easy to recognize the value of ERP, whose data subsequently enabled the development of BI, and to overlook that its catalyst was a new business model termed Business Process Re-engineering, or BPR.

    BPR defined a new and highly competitive operating model by showing how to redesign business operations around these technologies. As BPR/ERP early adopters transformed the competitive dynamics of the marketplace, late adopters were forced to play catch-up to compete, or even survive. It is no coincidence that this period corresponds to the sharpest increase in the rate of corporate failure.

    In 2017 the leading management consultants are united in their vision of a new digital economy created by the combination of Internet-centric technologies. Less clear is the detail of how to form a connected, online, dynamic enterprise capable of winning business through semi-autonomous, AI-based decisions; an enterprise that uses IoT sensing to render the physical world in digital form and draws on cloud-based resources as on-demand services.

    Sadly, the skills to achieve this from disparate technologies and products are in short supply, and for most enterprises this is a serious concern holding back their strategy.

    Enterprise management knows that the risk, and expense, of achieving this through a custom deployment is high; it is difficult even to know enough to establish the right, achievable business requirement. The desirability of a 'packaged' solution that both establishes the business case and ensures the outcome is safely delivered is obvious.

    But the market is maturing, and now two leading IT vendors and one solution integrator (the IoT equivalent of a systems integrator) are offering well-integrated solution portfolios that avoid much, if not all, of the identifiable risk. Each portfolio covers an enterprise activity in a comprehensive manner, but each focuses on a different aspect of the enterprise business model. The following brief outline of each draws attention to these significant changes in the IoT business market (see the footnote on the selection of these three vendors).

     

    1. SAP

    SAP positions IoT as a further stage in enterprise operating efficiency, integrating with SAP ERP, business intelligence and S/4HANA. The value of data-flow integration through the real-time, in-memory processing of the latest S/4HANA release is a key aspect of the SAP architecture. SAP initially built in-house expertise in using IoT sensing to extend the range and types of data used in its vertical-sector ERP solutions. Then, in the autumn of 2016, SAP announced a major two-billion-euro program of investment, including the acquisition of Plat.One with its proven IoT platform, to accelerate its IoT activities.

    The result was the introduction of SAP Leonardo, a comprehensive portfolio of IoT capabilities, each aimed at improving an individual enterprise activity, together with an integration architecture comprising three major elements to ensure enterprise-wide integration and business value:

    SAP Leonardo Bridge combines real-time information from connected things with business processes, through a range of packaged, end-to-end enterprise solutions for connected things, from products to people, across line-of-business and industry use cases.

    SAP Leonardo Foundation provides best-of-breed business services to rapidly build IoT applications, including digital twins and reusable application services, as well as applying predictive algorithms, all running on the SAP Cloud Platform.

    SAP Leonardo for Edge Computing manages data collection and offers edge-based processing where required, handling connectivity, latency and device protocols. A generic sketch of what such edge processing can involve follows below.
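
    To make "edge-based processing" concrete, here is a generic sketch of the kind of aggregation an edge node performs before forwarding data to the cloud. It is a toy illustration with hypothetical names, not SAP Leonardo code:

        // Generic edge-aggregation sketch; hypothetical names, not the SAP Leonardo API.
        case class Sample(deviceId: String, value: Double, ts: Long)
        case class WindowSummary(deviceId: String, min: Double, max: Double, mean: Double, count: Int)

        // Collapse a (non-empty) window of raw samples into a single summary record,
        // so only the summary, not every raw reading, has to cross the network.
        def summarize(deviceId: String, window: Seq[Sample]): WindowSummary = {
          val values = window.map(_.value)
          WindowSummary(deviceId, values.min, values.max, values.sum / values.size, values.size)
        }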

    To encourage enterprises to take advantage of this integrated environment, SAP offers the Leonardo Jump Start Program, with fixed time frames and costs to achieve the selected solution outcome.

    2. Salesforce

    Established as the major innovator in providing cloud-based business services focused on maximizing enterprise sales and revenue, Salesforce fully incorporates data from IoT devices and sensing to drive measurable outcomes. Salesforce IoT Cloud is a specialized, cloud-based set of capabilities architecturally integrated with the other Salesforce clouds, an approach that allows IoT data to be combined with any other data, rules or processing actions that form a Salesforce business outcome.

    IoT Cloud is a platform for transforming connected products into engaging customer experiences. With partners providing the capability to manage, at massive scale, the connectivity and data collection/collation originating from IoT devices and sensors, IoT Cloud marries customer context to IoT data to enable real-time customer engagement. The processing capabilities are, in common with the other Salesforce clouds, provided by Salesforce Thunder, described as a "massively scalable real-time event-processing engine." Thunder provides a common processing service using definable, state-based business rules that allow IoT data to be used in conjunction with non-IoT data to trigger events (a toy sketch of such a rule follows below). Salesforce plans to add Einstein AI capabilities to further extend the business value.
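
    To make the idea of state-based rules concrete, here is a minimal toy sketch of a rule that combines an IoT reading with non-IoT CRM context to trigger a business event. All names are hypothetical; this is not the Salesforce IoT Cloud API:

        // Toy state-based rule; hypothetical types, not the Salesforce Thunder API.
        case class DeviceState(deviceId: String, temperature: Double, accountTier: String)

        // Combine an IoT reading (temperature) with non-IoT CRM context (account tier)
        // and emit a business event when the rule's condition is met.
        def evaluate(state: DeviceState): Option[String] =
          if (state.temperature > 80.0 && state.accountTier == "premium")
            Some(s"Open priority service case for device ${state.deviceId}")
          else None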

    Salesforce aims to integrate all data inputs, now extended to include IoT, with existing data, established business rules and AI-derived relationships to achieve maximum impact in managing customer experiences. The Salesforce 'as a service' deployment model encourages a low-cost, low-risk adoption path.

    3. Capgemini

    Solution integrator for IoT and 3D platforms is an obvious role, often claimed as a simple extension of systems integration. Though the concept of combining best-of-breed elements into a strong, customized offering is similar, there are substantial differences. Few systems integrators have the depth of both business and technology skills in digital markets, combined with the breadth of understanding of enterprise business activities, not only to compete but to develop an enterprise transformation portfolio.

    Capgemini's Digital Manufacturing Services portfolio draws on its extensive network of experts working with a wide range of leading IT vendor partners in its ecosystem. The result is a cohesive, integrated portfolio of offerings that redefines product and asset management, and manufacturing operations, from beginning to end to suit the digital economy. The individual offerings are part of a range of Capgemini 'Ready2Go' prebuilt business solutions, such as Digital Industrial Asset Lifecycle Management (DiALM) and the eObjects IoT platform, all of which plug into fully integrated processes across an enterprise.

    The Capgemini portfolio approach enables manufacturers to start by improving the activities selected to deliver the highest business value, then continue their transformation by adding further activities, always secure in the ongoing integration.

    Summary

    Many enterprises have been reluctant to invest in IoT-enabled digital transformation, fearing that the market was too immature, that project failure or expensive overruns were likely, or even that initial investments would turn into technology dead ends restricting future options.

    SAP, Salesforce and Capgemini are all offering 'easy start' entry into significant enterprise business activity solutions. Interestingly, each has used its expertise to develop a different focus, incidentally demonstrating how wide the overall transformation of the market will become. Existing customers of any of these three vendors should seize the opportunity to discuss their options for starting a lower-risk, high-business-value project.

    Footnote: Constellation has identified these three vendors on the basis of market interest and client enquiries; their selection neither constitutes a recommendation nor implies that other vendors do not have competitive offerings.

  • By Chris Kanaracus

    Constellation Insights

    The concept of indirect access to SAP systems has been a thorn in the side of customers for years, and now a major court victory for SAP stands to bring the issue to a head. A judge in the UK's High Court has ruled that drinks distributor Diageo must compensate SAP, to the possible tune of nearly $68 million, in connection with two Salesforce-based systems that accessed data in its core SAP system.

    Several years ago, Diageo developed two systems based on Salesforce software called Gen2 and Connect, according to the judge's ruling. The first helps sales and service representatives manage their affairs, while the second gives customers the ability to place orders on their own, rather than through a call center.

    Diageo also licenses SAP Process Integration (PI, formerly known as Exchange Infrastructure), which it has used to connect Gen2 and Connect with its core SAP system. Although the applications' end users don't directly interact with SAP, SAP contended that it was entitled to named-user licenses for those end customers, all 5,800 of them.

    SAP argued that its contract with Diageo did not entitle any non-named users to access its ERP system, and that while Exchange Infrastructure is a "conduit for messages between systems, it does not and cannot perform the roles of the systems between which it provides connectivity," according to the ruling. The contract also didn't include any language carving out an exception for PI, the ruling adds.

    In turn, Diageo argued that PI is a "gatekeeper license" for accessing SAP and that its usage of it with Gen2 and Connect was therefore covered under the agreement.

    But Justice O'Farrell disagreed, writing in her decision:

    I reject Diageo's submission that SAP PI is a "gatekeeper" licence for gaining access to the SAP suite of applications and database. The Exhibit contains a separate basis of pricing for the SAP PI software engine and adapters, which applies even where there is a Named User licence for mySAP ERP: "Usage of one or more of these software engines is not included as part of the standard mySAP Business Suite Named User licence and is subject to an additional charge in each case" (paragraph 2b) of the Exhibit). Therefore, it is clear that it is an addition, rather than an alternative, to authorisation under a Named User licence.

    It also appears that Diageo's attempt to provide more convenience for its customers and sales representatives inadvertently worked against it in the judge's ruling. As the decision puts it:

    Diageo relies on the fact that, prior to the introduction of Connect, customers were required to place orders through call centres. SAP has never contended that such customers should be Named Users. The Connect portal simply allows the customers to place orders directly into the system, rather than through an intermediary. However, that argument does not assist Diageo. When an order is placed through a call centre, there is no interaction between the customer and the mySAP ERP software. The individual submitting the order to the mySAP ERP system is the call centre operative. The call centre operatives are Named Users. It follows that if the individual submitting the order to the mySAP ERP system is the customer, that customer should be a Named User.

    SAP had sought £54,503,578 ($67.8 million) in damages, but the actual amount Diageo will be ordered to pay won't be calculated until a later date. By way of contrast, Diageo paid SAP between £50 million and £61 million for regular maintenance fees and services through November 2015. 

    Analysis: A Wakeup Call for SAP Customers

    While indirect access is far from a new issue for SAP customers, the UK court ruling should stand as a wakeup call for any who haven't done due diligence concerning their potential exposure to this issue.

    SAP has long used license audits to pressure customers into making payments for what it deems unauthorized indirect access to its software. Constellation sides with the generally accepted industry parameters of indirect access, which should include the ability to process batch data; to aggregate information into a data warehouse or other data store; to access data for use in another system via data integration; and to enter data from a third-party system.

    Based on that, it does not appear that Diageo broke any written or unwritten rules. Diageo has the right to appeal the judge's decision, and one would expect it will. But for now, the ruling may embolden SAP to pursue further litigation around indirect access, not only in the UK but in other territories as well.

    Constellation believes that overreaching pursuit of additional money for indirect access fees is anti-customer, and at odds with the friendlier, easier-to-work-with image SAP has sought to project under the leadership of CEO Bill McDermott.

    It's also counterproductive in the long run, as more viable options to move off SAP ERP emerge from competitors. And needless to say, it's horrible for a vendor's public relations and at odds with today's forward-thinking enterprise IT shops, which are looking to generate innovation and efficiencies through the combination of systems from multiple vendors—as Diageo did.

    Of course, SAP is entitled to protect its intellectual property. And while Constellation has received a consistent uptick in client requests related to SAP indirect access claims, it's far from guaranteed the company will use the UK decision as a launching pad for a wave of new lawsuits. 

    But a number of things need to happen in the wake of this ruling. First and foremost, customers should examine their usage of SAP with third-party applications and compare that with contract terms to determine their potential exposure and figure out a proactive fix.

    Second, user groups should engage in a dialogue with SAP in order to gain a clear-cut agreement on exactly what indirect access does and doesn't allow. Frankly, the definition is often murky and malleable from customer to customer, and that has done no one any good. Constellation will be tracking this topic even more closely in the weeks and months ahead.


  • By Doug Henschen

    IBM Machine Learning for z/OS could be a boon to big banks and insurance companies that want advanced analytics on the mainframe. Next up is the IBM Power platform.

    Public cloud providers have popularized machine learning with low-cost, easily accessible services, but that’s a separate world from the tightly regulated, on-premises computing environments maintained by many big banks and insurance companies. Now IBM is bringing cutting-edge analytics to these mainframe customers with IBM Machine Learning (IBM ML) for z/OS.

    Announced February 15 in New York, IBM ML is a private-cloud-only offshoot of IBM Watson Machine Learning, the public-cloud service on IBM Bluemix. More than 90 percent of data still resides in private data centers, according to IBM, which argues that it is in a unique position to bring the latest in analytics to these environments, starting with the IBM mainframe.

    Thousands of companies still rely on IBM System z mainframes, including 44 of the top 50 global banks, 10 out of 10 of the largest insurers and 90 percent of the world’s airlines. These organizations have been among the most conservative about moving their core transactional applications to new platforms. That does not mean, however, that they are not interested in taking advantage of advanced analytics.

    IBM Machine Learning for z/OS will bring transaction-time analytics to the mainframe environments still heavily used by big banks and insurance companies.

    Heretofore, big banks and insurance companies have relied on sampling methods or batch-oriented bulk data movement to support predictive analytics. Hadoop-based data lakes, for example, are often used for customer-360 and risk analyses, and machine learning is increasingly popular in that role. But these approaches introduce data-movement costs, labor-intensive manual steps and latency. The ideal in analytics, and the goal with IBM ML for z/OS, is to bring the analytics to the data rather than moving the data to a separate analytics environment. IBM ML for z/OS relies on an external x86 server and z Integrated Information Processors (zIIP coprocessors), so it doesn't impact production performance or consume (expensive) mainframe processing cycles.

    IBM ML for z/OS has been in beta since October, says IBM, and 20 organizations have been part of the beta program. Most of those organizations are banks and insurance companies, and many are seeking an alternative to rules-based and table-based systems that provide more primitive and brittle predictive capabilities. With machine learning applied directly to data in the mainframe environment, IBM ML promises more accurate, customer-specific prediction and, therefore, more extensive automation at the time of the transaction.

    American Federal Credit Union, one IBM ML beta client, currently automates 25 percent of lending decisions, while the remaining 75 percent go to underwriters. IBM says early testing for American Federal showed that IBM ML promises to automate 90 percent of the workload that would otherwise go to underwriters. Another beta customer, Argus Health, is using IBM ML for z/OS to apply and continuously update models and scores against payer, provider and pharma-benefits data in order to predict outcomes and improve the effectiveness of treatments. Banks and insurance companies have been first in line for IBM ML for z/OS, but IBM expects airlines to use the system for applications including predictive maintenance.

    IBM says it intends to support analytics with a choice of languages, frameworks and platforms. At launch, IBM ML for z/OS is based on Scala and uses the Spark ML library, but there are plans to support R, Python, TensorFlow and other languages and libraries. To make life easier for developers, IBM ML for z/OS includes an optimized data layer, built by Rocket Software, to connect to mainframe sources such as DB2, VSAM and IMS as well as non-mainframe data sources. In a demo at the announcement event, an IBMer correlated data from the cloud-based Twitter Insights service on IBM Bluemix with transactional records on the mainframe to support customer churn analysis (a generic sketch of that kind of Spark ML job follows below).
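
    Because the launch version is Scala-based and uses Spark ML, the general shape of such a training job is familiar from open source Spark. The following minimal sketch is illustrative only; the churn use case, column names and file paths are invented, and this is not IBM ML for z/OS code:

        import org.apache.spark.ml.Pipeline
        import org.apache.spark.ml.classification.LogisticRegression
        import org.apache.spark.ml.feature.VectorAssembler
        import org.apache.spark.sql.SparkSession

        // Generic Spark ML training job; illustrative sketch only.
        object ChurnTraining {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder.appName("churn-training").getOrCreate()

            // In IBM ML for z/OS the data layer would surface mainframe sources
            // (DB2, VSAM, IMS); a CSV file stands in for that here. The "churned"
            // label column is assumed to hold 0.0/1.0 values.
            val df = spark.read.option("header", "true").option("inferSchema", "true")
              .csv("transactions.csv")

            val features = new VectorAssembler()
              .setInputCols(Array("balance", "txn_count", "days_since_contact"))
              .setOutputCol("features")
            val lr = new LogisticRegression().setLabelCol("churned")

            val model = new Pipeline().setStages(Array(features, lr)).fit(df)
            model.write.overwrite().save("models/churn")
            spark.stop()
          }
        }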

    IBM ML includes the company’s Cognitive Assistant for Data Science (CADS), which automates the selection of best-fit algorithms for the modeling scenario at hand. IBM’s software also includes model-management and governance capabilities that are essential in regulated environments.

    Beyond adding support for more languages and machine learning libraries, the next big step for IBM ML will be support for the IBM Power platform, which also runs workloads that tend to remain in private-cloud environments. Last November IBM announced PowerAI Suite software for a high-performance-computing-specific IBM server that pairs Power8 chips with Nvidia graphics processing units (GPUs). The combination supports machine learning libraries such as Caffe, Torch and Theano, and last month added Google's hot TensorFlow deep learning framework to the mix. IBM ML support would add CADS for automated algorithm selection as well as IBM's model-management and governance capabilities.

    The roadmap for IBM ML calls for more choices of languages, machine learning and deep learning libraries, and support for IBM's Power Systems platform.

    MyPOV on IBM ML

    It only makes sense for IBM to bring its latest analytical capabilities to System z and Power customers. Whether those customers have already turned elsewhere for predictive capabilities, and whether IBM ML for z/OS (or, when available, IBM ML for Power) is the better alternative, are separate questions. Analytic latency and data movement at high scale are both undesirable. But to what extent have companies already offloaded historical data from the mainframe onto lower-cost platforms? That would have a big impact on the accuracy and appeal of IBM ML. And to what extent are prospects relying on hard-to-maintain rules- or table-based systems if they're not using more advanced forms of prediction?

    Applying prediction at transaction time is clearly desirable, but companies including IBM have offered answers to this challenge before. To beat the options already in place, IBM ML must offer lower latency, more accurate predictions, a higher level of automation, lower total cost of ownership, or all of the above. IBM had a lot to say about lower latency and, through automated best-fit algorithm selection, better accuracy. We're looking forward to conversations with early adopters to hear their take on the advantages of IBM ML over alternative routes to predictive insight.

    Related Reading:
    Spark Gets Faster For Streaming Analytics
    Virginia Tech Fights Zika With High-Performance Prediction

    NRF Big Show 2017 Spotlights Data-Driven Imperatives

  • By Chris Kanaracus

    Constellation Insights

    A European industry group composed of cloud infrastructure providers has released a 41-page "code of conduct" that seeks to assure customers that their data is being protected in accordance with standards set by government regulation. Here's the gist from the Cloud Infrastructure Services Providers in Europe (CISPE)'s announcement:

    The CISPE Data Protection Code of Conduct provides a data protection compliance framework that makes it easier for customers to assess whether cloud infrastructure services being offered by a particular provider are suitable for the processing of personal data.

    Cloud providers adhering to the Code must give customers the choice to store and process their data entirely within the European Economic Area. Providers must also commit that they will not access or use their customers' data for their own purposes, including, in particular, for the purposes of data mining, profiling or direct marketing.

    Companies declaring compliance with the CISPE Code of Conduct requirements represent a group of leading cloud infrastructure providers operating in Europe: Amazon Web Services (AWS), Aruba, DADA, Daticum, Gigas Hosting, Ikoula, LeaseWeb, Outscale, OVH, Seeweb, SolidHost and UpCloud, with more to be announced soon. 

    The framework set out by CISPE adheres both to Europe's existing Data Protection Directive and to the EU's General Data Protection Regulation, which comes into effect in May 2018, according to the group.

    Overall, the code of conduct is a proactive and welcome move but one that should be viewed with the right amount of skepticism. It is an industry-led effort, after all, and therefore is as much about marketing as it is about complying with data-protection laws. (Indeed, members of the group are entitled to use a compliance mark on their websites and other materials showing their adherence to the code of conduct.)

    Secondly, the full document includes key passages such as this one:

    The Code is a voluntary instrument, allowing a CISP to evaluate and demonstrate its adherence to the Code Requirements for one or several of its services. This may be either (i) certification by an independent third party auditors, or (ii) self-assessment by the CISP and self-declaration of compliance.

    Certification by a third party is highly preferable to "self-assessment" and "self-declaration of compliance." Customers who engage with providers bearing the CISPE compliance mark should check whether those bona fides have been verified by an independent auditor.

    Notably missing from the initial list of participants are Microsoft and IBM. The former has staked out a sizable claim in Europe for its Azure cloud services, and has sought differentiation through arrangements such as its partnership with Deutsche Telekom, under which DT runs the data centers and Microsoft can access customer data only with permission from DT or the customer.

    In any event, the code of conduct is well worth a careful read whether you're in Europe or other parts of the world, as it contributes to the ever-important discussion about data privacy and sovereignty in a rapidly changing world.


  • By Chris Kanaracus

    Constellation Insights

    IBM made headlines in 2015 when it committed to spending $3 billion on its IoT (Internet of Things) strategy over the following several years, and this week it reached a milestone in that journey with the opening of a new Watson IoT global headquarters in Munich. The $200 million center will provide a base for IBM researchers and partners, as well as give customers the ability to test-drive IBM's IoT platform, as Watson GM Harriet Green said in a blog post:

    The work we will do will be some of the most advanced in the industry. Like the work we're doing with Airbus and Schaeffler, using digital twins to transform their production process, from the design phase all the way through to their maintenance and servicing.

    The concept of digital twins is the idea that, through IoT data, you can create a complete digital representation of a physical object: a car, a jet engine, or a building, for example.

    We can use these representations to understand and manage complex systems more quickly, more intimately. But to date, most companies have used digital twins for narrow, limited applications.

    Some use them as an engineering solution – helping design the next generation of connected products. Others use them to improve operational processes like maintenance around a connected product. But at IBM, we see these digital twins spanning the entire product lifecycle – from designing to planning to testing to building to maintaining to servicing.
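
    For readers new to the term, a digital twin is at bottom a digital record kept in sync with a physical asset's sensor readings. A minimal toy sketch (hypothetical names, unrelated to IBM's Watson IoT implementation):

        // Toy digital twin; hypothetical sketch, not IBM Watson IoT code.
        case class SensorReading(field: String, value: Double, timestampMs: Long)

        class DigitalTwin(val assetId: String) {
          private var state = Map.empty[String, Double]
          private var lastUpdatedMs = 0L

          // Each reading updates the twin, so the digital copy mirrors the asset.
          def update(r: SensorReading): Unit = {
            state += (r.field -> r.value)
            lastUpdatedMs = math.max(lastUpdatedMs, r.timestampMs)
          }

          def snapshot: Map[String, Double] = state
        }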

    Some 1,000 IBM IoT researchers will be based at the Munich facility. The opening coincides with IBM's Genius of Things event, which also provided an occasion for IBM to showcase a wide range of customer case studies. That's what's been lacking in IBM's IoT strategy to date, says Constellation Research VP and principal analyst Andy Mulholland.

    "Certainly IBM has, made a strategic commitment to investing in IoT, but actual detailed announcements on products, case studies and the like were not numerous," Mulholland says. "Today is the day that IBM Watson IoT suddenly goes really public with a slew of announcements, partners and customers all in one huge event aimed to show exactly what IBM's IoT division has been doing for the last two years."

    "Its all here," Mulholland adds. "A strategy, points of view on the use of IoT and cognitive computing, and a set of capabilities in the form of products and applied solutions. If you are interested in the application of IoT to enterprises then IBM has just made available a lot of information that will be worth studying."

    IBM also made a slew of news announcements at the event, including partnerships with Visa, Bosch, Nokia, Seebo and Vodafone, and customer wins such as SNCF French National Railway and elevator manufacturer KONE. 

    The partnership with Visa, which you can read about in detail here, is of particular interest. It will use Visa's token payment services in conjunction with Watson IoT to allow any connected device to become a point-of-sale system: think beyond smart fridges that let you order more eggs and milk, to a fitness tracker from which you can buy new running shoes or energy drinks.

    Big Blue's big bet on IoT is part of its overall effort to drive revenue in new technology areas as its traditional storage and server businesses have suffered declines. The new center, along with the emergence of compelling case studies and partnerships, will only help it tell a more convincing IoT story to customers.


  • By Chris Kanaracus

    Constellation Insights

    Nokia is betting it can be a player in IoT by offering enterprises a single place to acquire a global IoT networking footprint. Here are the key details from its announcement at Mobile World Congress:

    Nokia WING will manage the IoT connectivity and services needs of a client's assets, such as connected cars or connected freight containers, as they move around the globe, reducing the complexity for enterprises who would otherwise be required to work with multiple technology providers.

    Connectivity is enabled by intelligent switching between cellular and non cellular networks. For example, a shipping container linked by satellite in the ocean could switch to being connected by a cellular network near a port.

    Nokia will offer a full service model including provisioning, operations, security, billing and dedicated enterprise customer services from key operations command centers. The company will use its own IMPACT IoT platform for device management, subscription management and analytics. Nokia IMPACT subscription management for eSIM will automatically configure connectivity to a communication service provider's network as the asset crosses geographical borders.

    Communication service providers can quickly take advantage of new business opportunities that will be made available by joining a global federation of IoT connectivity services. By leveraging their excess network capacity they will be able to serve enterprises that require near global IoT connectivity, rapidly and with little effort, to realize new revenue streams. 
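
    The satellite-to-cellular handover described in the announcement above boils down to connectivity selection for a moving asset. A toy sketch, with hypothetical types and prices rather than anything from Nokia WING:

        // Toy link selection for a moving asset; hypothetical, not Nokia WING code.
        sealed trait Link { def costPerMb: Double }
        case object Satellite extends Link { val costPerMb = 5.00 }
        case object Cellular  extends Link { val costPerMb = 0.10 }

        // Prefer the cheaper cellular link whenever it is reachable (e.g. near a
        // port), falling back to satellite in mid-ocean.
        def selectLink(cellularAvailable: Boolean): Link =
          if (cellularAvailable) Cellular else Satellite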

    Nokia also plans to offer WING as a white-label product that telcos and ISPs can use to create their own branded services.

    WING arrives at an interesting time for the IoT market, says Constellation Research VP and principal analyst Andy Mulholland.

    "There are starting to be questions as to why some analysts' predictions of millions of interconnected IoT devices within a couple of years hasn't happened," Mulholland says. "My simple reply is that the supporting telecommunications infrastructure offering the right services at the right price is still largely lacking. This announcement from Nokia puts another building block in place technically, but together with other technology elements such as LoRa it still has to be rolled out by telecoms." 

    "We seem to have the chicken and egg problem as to which comes first," he adds. "Is the lack of suitable infrastructure holding back demand, or is the demand not there for these new services? Meanwhile, the Intranet of Things continues to be rolled out within Enterprises in support of operational improvement."

  • By Chris Kanaracus

    Constellation Insights

    Google has made a long-anticipated move with the beta launch of Cloud Spanner, its globally distributed relational database that has powered many of its mega-scale consumer services for years. Here are the key details from Google's announcement:

    When building cloud applications, database administrators and developers have been forced to choose between traditional databases that guarantee transactional consistency, or NoSQL databases that offer simple, horizontal scaling and data distribution. Cloud Spanner breaks that dichotomy, offering both of these critical capabilities in a single, fully managed service.

    Cloud Spanner keeps application development simple by supporting standard tools and languages in a familiar relational database environment. It’s ideal for operational workloads supported by traditional relational databases, including inventory management, financial transactions and control systems, that are outgrowing those systems.

    With Cloud Spanner, your database scales up and down as needed, and you'll only pay for what you use. It features a simple pricing model that charges for compute node-hours, actual storage consumption (no pre-provisioning) and external network access. 

    For regional deployments, Spanner costs $0.90 per node per hour, with $0.30 per GB of storage per month. There are also charges for network egress. Multi-region pricing will be released soon. 
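
    As a rough worked example of that pricing model (the three-node, 2 TB, 30-day scenario is invented for illustration):

        // Worked cost example using the regional prices quoted above;
        // the node count, month length and data volume are illustrative.
        val nodes        = 3
        val hours        = 24 * 30            // a 30-day month
        val nodeHourUsd  = 0.90               // per node-hour, regional
        val storageGb    = 2048               // 2 TB stored
        val storageGbUsd = 0.30               // per GB-month

        val computeUsd = nodes * hours * nodeHourUsd // 3 * 720 * 0.90 = 1944.00
        val storageUsd = storageGb * storageGbUsd    // 2048 * 0.30    =  614.40
        println(f"Monthly estimate, excluding egress: $$${computeUsd + storageUsd}%.2f")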

    One early customer kicking Spanner's tires is supply-chain software vendor JDA, which sees Spanner as ideal for handling massive amounts of IoT data while providing high availability. 

    Though newly released as a public service, it seems safe to say Spanner has already been battle-tested at the highest levels: internally at Google, it handles tens of millions of queries each second and powers the likes of AdWords.

    Google has come up with simple and elastic pricing for Spanner, and there are clear use cases for it, says Constellation Research VP and principal analyst Doug Henschen. "Spanner uniquely delivers global scalability with consistency for demanding financial services, advertising, retail and supply chain applications requiring synchronous replication," he says. "If there's one weakness, it's that Cloud Spanner does not support complicated or multiple simultaneous reads and writes within single transactions. Still, it uniquely offers the always-available traits of scalable NoSQL options such as Cassandra but with the strong consistency of traditional relational databases."

    It's worth noting that Spanner is the inspiration for CockroachDB, an open-source database being developed by a number of former Google employees. CockroachDB is still in beta, but the startup has been working on version one for a few years now. It's not clear how CockroachDB will fare against Cloud Spanner, given the engineering and marketing resources Google can bring to bear, but the presence of an open-source alternative is welcome and could make parent company Cockroach Labs a tantalizing acquisition target for Google's competitors in cloud infrastructure.


  • By Doug Henschen

    Spark Summit East highlights progress on machine learning, deep learning and continuous applications combining batch and streaming workloads.

    Despite challenges including a new location and a nasty Nor’easter that put a crimp on travel, Spark Summit East managed to draw more than 1,500 attendees to its February 7-9 run at the John B. Hynes Convention Center in Boston. It was the latest testament to growing adoption of Apache Spark, and the event underscored promising developments in areas including machine learning, deep learning and streaming applications.

    The Summit had outgrown last year’s east coast home at the New York Hilton, but the contrast between those cramped quarters and the cavernous Hynes made comparison difficult. As I wrote of last year’s event, the audience was technical, and if anything, this year’s agenda seemed more how-to than visionary. There were fewer keynotes from big enterprise adopters and more from vendors.

    Matei Zaharia of Databricks recapped Spark progress last year, highlighting growing adoption and performance improvements in areas including streaming data analysis.

    The Summit saw plenty of mainstream talks on SQL and machine learning best practices as well as more niche topics, such as "Spark for Scalable Metagenomics Analysis" and "Analyzing Andromeda Galaxy Data Using Spark." Standout big-picture keynotes included the following:

    Matei Zaharia, the creator of Spark and chief technology officer at Databricks, gave an overview of recent progress and coming developments in the open source project. The centerpiece of Zaharia's talk concerned maturing support for continuous applications requiring simultaneous analysis of both historical and streaming, real-time information. One of the many use cases is fraud analysis, where you need to continuously compare the latest, streaming information with historical patterns in order to detect abnormal activity and reject possibly fraudulent transactions in real time.

    Spark already addressed fast batch analytics, but its support for streaming was limited to micro-batch processing (meaning up to seconds of latency) until last year's Spark 2.0 release. Zaharia said even more progress was made with December's Spark 2.1 release, with advances in Structured Streaming, a new, high-level API that addresses both batch and stream querying. Viacom, an early beta customer, is using Structured Streaming to analyze viewership of cable channels including MTV and Comedy Central in real time, while iPass is using it to continuously monitor WiFi network performance and security.
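
    To give a flavor of how Structured Streaming lets one API span batch and stream, here is a minimal fraud-style sketch in generic Spark 2.x Scala; the paths, schema and flagging rule are invented for illustration and are not Viacom's or iPass's code:

        import org.apache.spark.sql.SparkSession
        import org.apache.spark.sql.types._

        val spark = SparkSession.builder.appName("fraud-check").getOrCreate()
        import spark.implicits._

        // Historical per-card spending profiles: a static batch table.
        val profiles = spark.read.parquet("/data/card_profiles") // cardId, avgAmount

        // Incoming transactions as an unbounded streaming DataFrame.
        val txnSchema = new StructType()
          .add("cardId", StringType).add("amount", DoubleType)
        val txns = spark.readStream.schema(txnSchema).json("/data/incoming_txns")

        // The same DataFrame API covers both sides: join live events against
        // history and flag transactions far outside the usual pattern.
        val flagged = txns.join(profiles, "cardId")
          .where($"amount" > $"avgAmount" * 10)

        flagged.writeStream.format("console").outputMode("append").start().awaitTermination()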

    Alexis Roos, a senior engineering manager at Salesforce, detailed the role of Spark in powering the machine learning, natural language processing and deep learning behind emerging Salesforce Einstein capabilities. Addressing the future of artificial intelligence on Spark, Ziya Ma, a VP of Big Data Technologies at Intel, offered a keynote on “Accelerating Machine Learning and Deep Learning at Scale with Apache Spark.” James Kobielus of IBM does a good job of recapping Deep Learning progress on Spark in this blog.

    Ion Stoica, executive chairman of Databricks, picked up where Zaharia left off on streaming, detailing the efforts of UC Berkeley’s RISELab, the successor of AMPLab, to advance real-time analytics. Stoica shared benchmark performance data showing advances promised by Apache Drizzle, a new streaming execution engine for Spark, in comparison with Spark without Drizzle and streaming-oriented rival Apache Flink.

    Stoica stressed the time- and cost-saving advantages of using a single API, the same execution engine and the same query optimizations to address both streaming and batch workloads. In a conversation after his keynote, Stoica told me Drizzle will likely debut in Databricks’ cloud-based Spark environment within a matter of weeks and he predicted that it will show up in Apache Spark software as soon as the third quarter of this year.

    The Apache Drizzle execution engine being developed by RISELab promises better streaming query performance than today's Spark or Apache Flink.

    MyPOV of Spark Progress

    Databricks is still measuring Spark success in terms of number of contributors and number of Spark Meetup participants (the latter count is 300,000-plus, according to Zaharia), but to my mind, it’s time to start measuring success by mainstream enterprise adoption. That’s why I was a bit disappointed that the Summit’s list of presenters in the CapitalOne, Comcast, Verizon and Walmart Labs mold was far shorter than the list of vendors and Internet giants like Facebook and Netflix presenting.

    Databricks says it now has somewhere north of 500 organizations using its hosted Spark Service, but I suspect the bulk of mainstream Spark adoption is now being driven by the likes of Amazon (first and foremost) as well as IBM, Google, Microsoft and others now offering cloud-based Spark services. A key appeal of these sources of Spark is the availability of infrastructure and developer services as well as broader analytical capabilities beyond Spark. Meanwhile, as recently as last summer I heard Cloudera executives assert that the company’s software distribution was behind more Spark adoption than that of any other vendor.

    In a thought-provoking keynote on "Virtualizing Analytics," Arsalan Tavakoli, Databricks' VP of customer engagement, dismissed Hadoop-based data lakes as a "second-generation" solution challenged by disparate, complex tools and by access limited to big-data developer types. But Tavakoli also acknowledged that Spark is only "part of the answer" to delivering a "new paradigm" that decouples compute and storage, provides uniform data management and security, unifies analytics and supports broad collaboration among many users.

    Indeed, it was telling when Zaharia noted that 95% of Spark users employ SQL in addition to whatever else they're doing with the project. That tells me that Spark SQL is important, but it also tells me that, as appealing as Spark's broad analytical capabilities and in-memory performance may be, it's still just part of the total analytics picture. Developers, data scientists and data engineers who use Spark are also using non-Spark options, ranging from the prosaic, like databases, database services and Hive, to the cutting edge, such as emerging GPU- and high-performance-computing-based options.

    As influential, widely adopted, widely supported and widely available as Spark may now be, organizations have a wide range of cost, latency, ease-of-development, ease-of-use and technology maturity considerations that don’t always point to Spark. At least one presentation at Spark Summit cautioned attendees not to think of Spark Streaming, for example, as a panacea for next-generation continuous applications.

    Spark is today where Hadoop was in 2010, as measured by age, but I would argue that it’s progressing more quickly and promises wider hands-on use by developers and data scientists than that earlier disruptive platform.

    Related Reading:
    Spark Summit East Report: Enterprise Appeal Grows
    Spark On Fire: Why All The Hype?

