Smart IOT London

21 - 22 MARCH 2018 EXCEL, LONDON

Co-Located With:

  • Big Data World
  • Cloud Expo Europe
  • Data Centre World

THINGS, IN ACTION

Wherever you are on your IoT journey, Smart IoT London is the only place to be

Industry News

Constellation Research

  • By Steve Wilson

    It’s been a big month for blockchain.

    • The Hyperledger consortium released the Fabric platform, a state-of-the-art configurable distributed ledger environment including a policy editor known as Composer.
    • The Enterprise Ethereum Alliance was announced, being a network of businesses and Ethereum experts, aiming to define enterprise-grade software (and evidently adopt business speak).
    • And IBM launched its new Blockchain as a Service at the Interconnect 2017 conference in Las Vegas, where blockchain was almost the defining theme of the event.  A raft of advanced use cases were presented, many of which are now in live pilots around the world.  Examples include shipping, insurance, clinical trials, and the food supply chain.

    I attended InterConnect and presented my research on Protecting Private Distributed Ledgers, alongside Paul DiMarzio of IBM and Leanne Kemp from Everledger. 

    Disclosure: IBM paid for my travel and accommodation to attend Interconnect 2017.

    Ever since the first generation blockchain was launched, applications far bigger and grander than cryptocurrencies have been proposed, but with scarce attention to whether or not these were good uses of the original infrastructure.  I have long been concerned with the gap between what the public blockchain was designed for, and the demands from enterprise applications for third generation blockchains or "Distributed Ledger Technologies" (DLTs).  My research into protecting DLTs  has concentrated on the qualities businesses really need as this new technology evolves.  Do enterprise applications really need “immutability” and massive decentralisation? Are businesses short on something called “trust” that blockchain can deliver?  Or are the requirements actually different from what we’ve been led to believe, and if so, what are the implications for security and service delivery? I have found the following:

    In more complex private (or permissioned) DLT applications, the interactions between security layers and the underlying consensus algorithm are subtle, and great care is needed to manage side effects. Indeed, security needs to be rethought from the ground up, with key management for encryption and access control matched to often new consensus methods appropriate to the business application. 
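    To make the key-management point concrete, here is a minimal sketch (illustrative only, not Hyperledger Fabric's membership-service API) of a permissioned-ledger participant signing a ledger entry with a managed key, and a validating peer checking it, using the Python cryptography package:

        # Minimal sketch: per-participant signing keys in a permissioned ledger.
        # In production these keys would live in an HSM or certified crypto module;
        # here they are generated in software purely for illustration.
        from cryptography.hazmat.primitives.asymmetric import ec
        from cryptography.hazmat.primitives import hashes
        from cryptography.exceptions import InvalidSignature

        participant_key = ec.generate_private_key(ec.SECP256R1())   # held by one member
        entry = b'{"asset": "shipment-42", "owner": "OrgA"}'

        signature = participant_key.sign(entry, ec.ECDSA(hashes.SHA256()))

        # A validating peer checks the signature against the member's registered public key.
        try:
            participant_key.public_key().verify(signature, entry, ec.ECDSA(hashes.SHA256()))
            print("entry accepted")
        except InvalidSignature:
            print("entry rejected")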

    At InterConnect, IBM announced their Blockchain as a Service, running on the “Bluemix High security business network”.  IBM have re-thought security from the ground up.  In fact, working in the Hyperledger consortium, they have re-engineered the whole ledger proposition. 

    And now I see a distinct shift in the expectations of blockchain and the words we will use to describe it.

    For starters, third generation DLTs are not necessarily highly distributed. Let's face it, decentralization was always more about politics than security; the blockchain's originators were expressly anti-authoritarian, and many of its proponents still are. But a private ledger does not have to run on thousands of computers to achieve the security objectives.  Further, new DLTs certainly won't be public (R3 has been very clear about this too – confidentiality is normal in business but was never a consideration in the Bitcoin world).  This leads to a cascade of implications, which IBM and others have followed. 

    When business requires confidentiality and permissions, there must be centralised administration of user keys and user registration, and that leaves the pure blockchain philosophy in the shade. So now the defining characteristics shift from distributed to concentrated. To maintain a promise of immutability when you don't have thousands of peer-to-peer nodes requires a different security model, with hardware-protected keys, high-grade hosting, high availability, and special attention to insider threats. So IBM's private blockchains run on Hyperledger Fabric, hosted on z Systems mainframes. They employ cryptographic modules certified to Common Criteria EAL 5-plus and FIPS-140 level 3. These are the highest levels of security certification available outside the military. Note carefully that this isn't specmanship. With the public blockchain, the security of nodes shouldn't matter because the swarm, in theory, takes care of rogue miners and compromised machines. But the game changes when a ledger is more concentrated than distributed.

    Now, high-grade cryptography will become table stakes. In my mind, the really big thing that's happening here is that Hyperledger and IBM are evolving what blockchain is really for.

    The famous properties of the original blockchain – immutability, decentralisation, transparency, freedom and trustlessness – came tightly bundled, expressly for the purpose of running peer-to-peer cryptocurrency.  It really was a one dimensional proposition; consensus in particular was all about the one thing that matters in e-cash: the uniqueness of each currency movement, to prevent Double Spend.

    But most other business is much more complex than that.  If a group of companies comes together around a trade manifest for example, or a clinical trial, where there are multiple time-sensitive inputs coming from different types of participant, then what are they trying to reach consensus about?

    The answer acknowledged by Hyperledger is "it depends". So they have broken down the idealistic public blockchain and seen the need for "pluggable policy".  Different private blockchains are going to have different rules and will concern themselves with different properties of the shared data.  And they will have different sub-sets of users participating in transactions, rather than everyone in the community voting on every single ledger entry (as is the case with Ethereum and Bitcoin).
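    As a conceptual sketch only (not Hyperledger Fabric's actual API), the "pluggable policy" idea can be pictured as a ledger whose validation rule is swapped per network: a cryptocurrency ledger plugs in a double-spend check, while a trade consortium plugs in an endorsement rule involving only the relevant participants:

        # Conceptual sketch of pluggable validation policies; the names are illustrative.
        def no_double_spend(entry, entries):
            """Cryptocurrency-style rule: each input may only ever be spent once."""
            spent = {i for e in entries for i in e["inputs"]}
            return not (set(entry["inputs"]) & spent)

        def two_party_endorsement(entry, entries):
            """Consortium-style rule: an entry needs sign-off from at least two participants."""
            return len(entry["endorsements"]) >= 2

        class Ledger:
            def __init__(self, policy):
                self.policy = policy          # the pluggable part
                self.entries = []

            def append(self, entry):
                if not self.policy(entry, self.entries):
                    raise ValueError("entry rejected by policy")
                self.entries.append(entry)

        coin_ledger = Ledger(policy=no_double_spend)
        trade_ledger = Ledger(policy=two_party_endorsement)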

    These are exciting and timely developments.  While the first blockchain was inspirational, it’s being superseded now by far more flexible infrastructure to meet more sophisticated objectives.  I see us moving away from “ledgers” towards multi-dimensional constructs for planning and tracing complex deals between dynamic consortia, where everyone can be sure they have exactly the same picture of what’s going on. 

    In another blog to come, I’ll look at the new language and concepts being used in Hyperledger Fabric, for finer grained control over the state of shared critical data, and the new wave of applications. 

     

  • By Doug Henschen

    Cloudera executives can't talk about IPO or cloud-services rumors. Here's what's on the record from the Cloudera Analyst Conference.

    There were a few elephants in the room at the March 21-22 Cloudera Analyst Conference in San Francisco. But between a blanket “no comment” about IPO rumors and non-disclosure demands around cloud plans -- even whether such plans exist, or not -- Cloudera execs managed to dance around two of those elephants.

    The third elephant was, of course, Hadoop, which seems to be going through the proverbial trough of disillusionment. Some are stoking fear, uncertainty and doubt about the future of Hadoop. Signs of the herd shifting the focus off Hadoop include Cloudera and O'Reilly changing the name of Strata + Hadoop World to Strata Data. Even open-source zealot Hortonworks has rebranded its Hadoop Summit as DataWorks Summit, reflecting that company's diversification into streaming data with its Apache NiFi-based Hortonworks DataFlow platform.

    Mike Olson, Cloudera's chief strategy officer, positions the company as a major vendor
    of enterprise data platforms based on open-source innovation.

    At the Cloudera Analyst Conference, Chief Strategy Officer Mike Olson said that he couldn’t wait for the day when people would stop describing his company as “a Hadoop software distributor” mentioned in the same breath with Hortonworks and MapR. Instead, Olson positioned the company as a major vendor of enterprise data platforms based on open-source innovation.

    MapReduce (which is fading away), HDFS and other Hadoop components are outnumbered by other next-generation, open-source data management technologies, Olson said, and he noted that some customers are just using Cloudera's distribution of Apache Spark, with support, on top of Amazon S3, without using any components of Hadoop.
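    The Spark-on-S3 pattern Olson described can be pictured with a minimal PySpark sketch; the bucket, path and file format below are assumptions for illustration, not a Cloudera configuration:

        # Minimal sketch: Spark reading data directly from Amazon S3, no HDFS cluster involved.
        from pyspark.sql import SparkSession

        spark = (SparkSession.builder
                 .appName("spark-on-s3-example")
                 .getOrCreate())

        # Read objects straight from S3 via the s3a connector and run a simple aggregation.
        events = spark.read.parquet("s3a://example-bucket/events/")
        events.groupBy("event_type").count().show()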

    Cloudera has recast its messaging accordingly. Where years ago the company’s platform diagrams detailed the many open source components inside (currently about 26), Cloudera now presents a simplified diagram of three use-case-focused deployment options (shown below), all of which are built on the same “unified” platform.

    Cloudera-developed Apache Impala is a centerpiece of the Analytic DB offering, and it competes with everything from Netezza and Greenplum to cloud-only high-scale analytic databases like Amazon Redshift and Snowflake. HBase is the centerpiece of the Operational DB offering, a high-scale alternative to DB2 and Oracle Database on the one hand and Cassandra, MapR and MemSQL on the other. The Data Science & Engineering option handles data transformation at scale as well as advanced, predictive analysis and machine learning.

    Many companies start out with these lower-cost, focused deployment options, which were introduced last year. But 70% to 75% of customers opt for Cloudera's all-inclusive Enterprise Data Hub license, according to CEO Tom Reilly. You can expect that when Cloudera introduces its own cloud services, it will offer focused deployment options that can be launched, quickly scaled and just as quickly turned off, taking advantage of cloud economies and elasticity.

    Navigating around the non-disclosure requests, here are a few illuminating factoids and updates from the analyst conference:

    Cloudera Data Science Workbench: Announced March 14, this offering for data scientists brings Cloudera into the analytic tools market, expanding its addressable market but also setting up competition with the likes of IBM, Databricks, Domino Data, Alpine Data Labs, Dataiku and a bit of coopetition with partners like SAS. Based on last year’s Sense acquisition, Data Science Workbench will enable data scientists to use R, Python and Scala with open source frameworks and libraries while directly and securely accessing data on Hadoop clusters with Spark and Impala. IT provides access to the data within the confines of Hadoop security, including Kerberos.

    Apache Kudu: Made generally available in January, this Cloudera-developed columnar, relational data store provides real-time update capabilities not supported by the Hadoop Distributed File System. Kudu went through extensive beta use with customers, and Cloudera says it’s seeing a split of deployment in conjunction with Spark, for streaming data applications, and with Impala, for SQL-centric analysis and real-time dashboard monitoring scenarios.

    Business update: CEO Tom Reilly said the company now has more than 1,000 customers, with at least half being large, Global 8,000 companies (the company’s primary target). This includes seven of the top-ten banks and nine of the top-ten telecommunications companies. The company now has 1,600 employees, up from 1,200 last year.

    My Take on Cloudera Positioning and Moves

    Yes, there's much more to Cloudera's platform than Hadoop, but given that the vast majority of customers store their data in what can only be described as Hadoop clusters, I expect the association to stick. Nonetheless, I don't see any reason to demur about selling Hadoop. Cloudera isn't saying a word about business results these days -- likely because of the rumored IPO. But consider the erstwhile competitors. In February Hortonworks, which has been public for two years, reported a 39% increase in fourth-quarter revenue and a 51% increase in full-year revenue (setting aside the topic of profitability). MapR, which is private, last year claimed (at a December analyst event) an even higher growth rate than Hortonworks.

    Assuming Cloudera is seeing similar results, it’s experiencing far healthier growth than any of the traditional data-management vendors. Whether you call it Hadoop and Spark or use a markety euphemism like next-generation data platform, the upside customers want is open source innovation, distributed scalability and lower cost than traditional commercial software.

    As for the complexity of deploying and running such a platform on premises, there’s no getting around the fact that it’s challenging – despite all the things that Cloudera does to knit together all those open-source components. I see the latest additions to the distribution, Kudu and the Data Science Workbench, as very positive developments that add yet more utility and value to the platform. But they also contribute to total system complexity and sprawl. We don’t seem to be seeing any components being deprecated to simplify the total platform.

    Deploying Cloudera’s software in the cloud at least gives you agility and infrastructure flexibility. That’s the big reason why cloud deployment is the fastest-growing part of Cloudera’s business. If and when Cloudera starts offering its own cloud services, it would be able to offer hybrid deployment options that cloud-only providers, like Amazon (EMR) and Google (DataProc) can’t offer. And almost every software vendor embracing the cloud path also talks up cross-cloud support and avoidance of lock-in as differentiators compared to cloud-only options.

    I have no doubt that Cloudera can live up to its name and succeed in the cloud. But as we’ve also seen many times, the shift to the cloud can be disruptive to a company’s on-premises offerings. I suspect that’s why we’re currently seeing introductions like the Data Science Workbench. It’s a safe bet. If and when Cloudera truly goes cloud, and if and when it becomes a public company, things will change and change quickly.

    Related Reading:
    Google Cloud Invests In Data Services, Scales Business
    Spark Gets Faster for Streaming Analytics
    MapR Ambition: Next-Generation Application Platform

     

  • By Holger Mueller
    We had the opportunity to attend Ultimate Software's yearly user conference, UltiConnect, held March 20th to 24th, 2017 at the Bellagio hotel in Las Vegas. The conference was well attended, with almost 2,900 attendees, limited by fire marshal restrictions.

    So, take a look at my musings on the event here (if the video doesn't show up, check here).

    No time to watch? Here is the 1-2 slide condensation (if the slide doesn't show up, check here).

    Want to read on? Here you go. It's always tough to pick the takeaways, but here are my Top 3:

    Rogers, Hartshorne and UltiPro Learning


    Ultimate has momentum – At the risk that this is getting repetitive (see my last four event reports), Ultimate is showing momentum. With 600 customer go-lives in 2016, an increase of 700+ customer attendees year over year is remarkable, but less surprising. Go-lives trigger product interest, and attending the user conference is an effective use of time. With Ultimate offering free training for life, most customers and professionals also take advantage of the numerous training options. And having Maroon 5 as the main act is certainly popular with the target audience. It remains remarkable, though, how Ultimate produces these growth numbers while mostly recording customer growth in North America only.
     
    Ultimate 2016 Highlights

    Xander, first out-of-the-box AI offering among HCM vendors – Ultimate announced Xander, its AI offering, available right now for Ultimate Perceptions. It is largely built on the Kanjoya acquisition and expertise, which is a good fit, as Kanjoya specialized in understanding unstructured data, something that comes in handy for the Perceptions products. But Ultimate did not stop there: in combination with the assets and expertise from the Vestrics acquisition and in-house work, it has formulated an ambitious cross-platform AI vision with Xander. And while we have to see what materializes later in the year for Xander, this marks the first formal AI launch by any major HCM vendor (if I missed one, let me know!).
     
    Intro of Xander


    Broadest Product Push (ever?)! – Ultimate has traditionally spent a very high percentage of revenue on R&D, but it did not necessarily show in terms of product delivery speed and breadth. For the years I have covered the vendor, it seemed more like a 'sluggish' pace, with a focus on a single new capability per year (see Recruiting, Onboarding etc.) and other housekeeping. One can only expect that R&D resources were nonetheless busy in the years before, perhaps laying the foundation for what is coming now: a broad push for new products and functionality. A new Learning product (we didn't go into details, maybe in combination with Schoox), new native mobile products (which garnered the most applause from the keynote audience), a new Time Management module, new Reporting capabilities, a new converged UX, a developer hub and finally Xander. A back-of-the-napkin calculation suggests more functionality is available or coming in 2017 than Ultimate announced and made available in 2014, 2015 and 2016 combined. So, congrats on the newly found stride.

    MyPOV

    Always good to attend events from vendors that are doing well. Growth leads to other good things, and it is the best ingredient for a successful user conference. A remarkable amount of new capability is available or coming during 2017, with vast repercussions for customers in regard to their solution portfolios, e.g. in the areas of Time and Learning. It's good to see that Ultimate is also addressing the UX challenges that we have been hearing (and writing) about for a few years; it makes sense to start with mobile and then bring a better user interface to the HR users (who, not surprisingly, are to a certain point 'clinging' to the old user interface).

    On the concern side, Ultimate R&D needs to deliver, and services and support teams and, most importantly, customers need to be ready to take up and implement these new products and their capabilities. The industry as a collective has struggled to roll out analytical capabilities (predictive analytics mostly) past the HR departments, which in most cases have set the bar too high, often depriving their enterprise, and especially their people leaders, of the ability to make significantly better decisions. And on the commercial side, Ultimate should find a way to monetize its R&D investment, as the rest of the industry does. That does not mean blowing up the price list, but charging for value. Too many 'free' capabilities can also be an issue for an enterprise's HR team. But let's see if this is a problem when we get there….

    But for now, congrats to Ultimate, which has put its product development efforts into high gear, with first results already showing, and which garners the prize for being the first major HCM vendor to announce and ship a first version of an AI platform. It will be key to monitor the progress. Stay tuned.

    Want to learn more? Check out the Storify collection below (if it doesn't show up, check here).

     



    Find more coverage on the Constellation Research website here, and check out my magazine on Flipboard and my YouTube channel here.
  • By Chris Kanaracus

    Constellation Insights

    LinkedIn is betting large organizations will be willing to pay up to $1,600 per seat per year for a new Enterprise edition of Sales Navigator, which it says will generate higher productivity and results for social selling efforts. Here are the key details from LinkedIn's announcement:

    Until now, if you were looking for a warm introduction to a lead, you could go through your personal LinkedIn connections, or use TeamLink, which pools the networks of all the Sales Navigator seat holders in your company. But we know your reps are probably not connected on LinkedIn to the vast majority of employees at your company, and not every employee in your company needs a seat of Sales Navigator (as much as we’d like that).

    TeamLink Extend solves that by letting anyone in your organization opt-in their LinkedIn network to the TeamLink pool. That means, if you’re trying to reach a prospect, you can quickly see if anyone in your company has a connection with that person, and reach out to your colleague to ask for warm introduction.

    LinkedIn is also integrating Enterprise Edition with its PointDrive tool, which gives salespeople the ability to give prospects more content through a desktop or mobile app instead of an email larded with attachments, giving reps visibility into how the materials are being consumed. 

    Perhaps the most telling piece of news for the longer-term is LinkedIn Enterprise's enhanced CRM integration. Its CRM Sync function will log Sales Navigator activities into CRM systems with a single click. This capability will be available for Salesforce first, not Dynamics CRM, although support is coming for other platforms this year. 

    LinkedIn Enterprise also includes CRM Widgets, which enable users to view Sales Navigator profile details within CRM systems. There are widgets for Salesforce and Dynamics now, with ones for Oracle, NetSuite, SugarCRM, Hubspot, SAP Hybris and Zoho coming soon.

    Analysis: No Walled Garden Here, But Caution Abounds

    Salesforce CEO Marc Benioff, who was outbid for LinkedIn by Microsoft, complained last year to regulators, alleging that Redmond would close off third-party access to LinkedIn's vast and valuable store of business data in favor of Dynamics CRM. The new integration points for LinkedIn Enterprise Edition suggest that on the contrary, Microsoft sees plenty of money in integrating LinkedIn with competing CRMs. Constellation believes this is a good approach not only for Microsoft but for all customers, as the potential value of alignment of CRM with LinkedIn still has plenty of runway. 

    But the new TeamLink feature shows Microsoft clearly wants to see how much value it can squeeze out of LinkedIn's data pool by leveraging its social graph. There are some challenges here, says Constellation Research VP and principal analyst Cindy Zhou.

    One concern is with how organizations will handle the opt-in to share contacts. The fewer employees who do, the less effective TeamLink becomes, she notes. There's also potential for spamming. "Organizations using TeamLink will need to be aware of the responsibility to properly train users so they don't abuse this additional access to connections," she says. "Ultimately, the connections didn't 'opt in' for their information to be used by a broader enterprise sales team."

    24/7 Access to Constellation Insights
    Subscribe today for unrestricted access to expert analyst views on breaking news.

  • By Carole Low

    The 2017 World Economic Forum focused on dynamic leadership as its theme, which sparked plenty of healthy dialogue and discussion. R "Ray" Wang, Constellation Research Chairman and Principal Analyst, spent time with global leaders in Davos, Switzerland discussing the essential characteristics that embody dynamic leaders, and is now sharing his findings with our ecosystem of global leaders who drive digital transformation across diverse, multi-faceted industries.

    Knowing how to balance those key components separates good from great leaders. To lead digital transformation effectively in this AI-driven world, the best leaders will focus on honing their dynamic leadership skills in concrete ways. Find out what Ray's views are by tuning into this Constellation Executive Network (CEN) Member Chat video replay.

     

     

    Every month, one of our featured Constellation analysts discusses the trends in disruptive technologies that forward-thinking business leaders need to pay attention to, in exclusive dialogues with Constellation Executive Network members. The complete CEN Member Chat schedule and replays are available here.

    To gain complete access to exclusive conversations with Constellation analysts like this one, we welcome you to get in touch with us about joining the Constellation Executive Network as a premium member. 

    Learn More about Constellation Executive Network Premium Membership

  • By Carole Low

    If you want to sound remotely informed on the latest disruptive technology trends as a business leader, you can start by mentioning blockchain. People's eyes either glaze over or light up. You have probably noticed that there is a lot of speculation by self-appointed thought leaders. Blockchain hype has been rampant when you consider the widespread implications of removing "middlemen" from transactions. 

    If you want a quick primer on the myths and realities of blockchain, check out this Constellation Executive Network (CEN) Member Chat, a video replay featuring Steve Wilson, Constellation VP, Principal Analyst and a blockchain expert.

     

    Every month, one of our featured Constellation analysts discusses the trends in disruptive technologies that forward-thinking business leaders need to pay attention to, in exclusive dialogues with Constellation Executive Network members. The complete CEN Member Chat schedule and replays are available here.

    To gain complete access to exclusive conversations with Constellation analysts like this one, we welcome you to get in touch with us about joining the Constellation Executive Network as a premium member. 

    Learn More about Constellation Executive Network Premium Membership

  • By Andy Mulholland

    The Digital Business model, with its dynamic, adaptive capability to react to events with intelligently orchestrated responses formed from Services, requires a very different enabling infrastructure from that of current Enterprise IT systems. As the Enterprise itself decentralizes into fast-moving, agile operating entities under an OpEx (costs allocated to actual use) management model, the supporting infrastructure must follow with a similar functional structure.

    The Technology that creates and supports Digital Business does not resemble that deployed in support of Enterprise Client-Server IT systems. Neither is it a rehash of the standard Internet Web architecture. Instead, a combination of Cloud Technology, both at the center and increasingly at the edge, running Apps in the form of Distributed Apps, linked by massive-scale IoT interactions and, increasingly, various forms of AI-driven intelligent reaction, represents a wholly different proposition.

    In existing Enterprise IT, the arrangement and integration of the technology complexities is defined by Enterprise Architecture; that term has deliberately not been used above, to highlight the difference. In contrast with the enclosed, defined Enterprise IT environment, where it is necessary to determine the relationships between a finite number of technology elements, a true Digital Enterprise operates dynamically across an effectively infinite number of technology elements, internally and externally.

    Enterprise IT, for the most part, supports Client-Server applications, as evidenced in ERP, and is focused on ensuring the outcomes of all transactions will maintain the common State of all data. To do this, the dependencies of all technology elements have to be identified in advance and integrated in fixed, close-coupled relationships. It is important to remember that Enterprise Architecture was developed to deploy the Enterprise Business model defined by Business Process Re-engineering (BPR).

    It is vital to recognize that the Enterprise Business model and the Technology model are, or should be, two sides of the same coin, coherently working together to enable the Enterprise to compete in its chosen market and manner. The introduction of a Digital Business model introduces a completely different set of technology requirements and, importantly, reverses accepted IT Architecture by requiring Stateless, Loosely Coupled orchestrations to support Distributed Environments.

    These simple statements cover some very complicated issues, and before going further, three important terms should be identified and clarified within the context used here:

    1. Stateful means the computer or program keeps track of the state of interaction, usually by setting values in a storage field designated for that purpose. Stateless means there is no record of previous interactions and each interaction request has to be handled based entirely on the information that comes with it (see the sketch after this list). Reference http://whatis.techtarget.com/definition/stateless
    2. Tightly-Coupled means hardware and software are not only linked together, but are also dependent upon each other. In a tightly coupled system where multiple systems share a workload, the entire system usually would need to be powered down to fix a major hardware problem, not just the single system with the issue. Loosely-Coupled describes how multiple computer systems, even those using incompatible technologies, can be joined together for transactions, regardless of hardware, software and other functional components. References http://www.webopedia.com/TERM/T/tight_coupling.html http://www.webopedia.com/TERM/L/loose_coupling.html
    3. Digital Business is the creation of new business designs by blurring the digital and physical worlds. ... in an unprecedented convergence of people, business, and things that disrupts existing business models. Reference https://www.i-scoop.eu/digital-business.
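    As a minimal sketch of the stateful/stateless distinction in (1), compare a handler that keeps interaction state between calls with one that must receive everything it needs in each request (illustrative Python, not tied to any particular product):

        # The stateful handler remembers previous interactions;
        # the stateless handler relies entirely on what arrives with the request.
        class StatefulCounter:
            def __init__(self):
                self.count = 0                      # stored interaction state

            def handle(self, increment):
                self.count += increment             # depends on what happened before
                return self.count

        def stateless_counter(current_count, increment):
            # No stored state: the caller supplies the prior count with each request.
            return current_count + increment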

    Clearly there is a need for something to act as an equivalent of Enterprise Architecture, and indeed there is no shortage of activities to create 'Architectural' Models for IoT. There is a fundamental challenge in the sheer breadth of what constitutes a Digital Market connected through IoT in different industry sectors. Though it might seem that the approach for a Smart Home is not likely to have much in common with self-driving cars, other than both being part of a Smart City, at the level of the supporting infrastructure there are minimal differences.

    The result is an overwhelming abundance of standards bodies, technology protocols and architectural models that will, in the short term, confuse rather than assist. A read through the listings covering each of those areas here will prove this point. Whilst there is no doubt the devil is in the detail and these things matter, IoT deployments should be driven from the Digital Business model outlined in the previous blog post.

    A blog is not the format to examine this topic in detail; instead the aim is to provide an overall understanding of a workable approach, and to make use of the views and solution sets available from leading Technology vendors to provide greater detail. The manner of breaking the 'architecture' down into the four abstracted conceptual layers illustrated below matches almost exactly the Technology vendors' own focus points.

    Enterprise Architecture methodologies start with a conceptual stage, an approach designed to provide clarification of the overall solution and outcome. This is necessary to avoid the distraction of specific product details, which often introduce unwelcome dependencies, at the first stage of shaping the solution/outcome vision.

    The four layers illustrated correspond to the major conceptual abstractions present in building, deploying, and operating the necessary Technology model for a Digital Business. This blog focuses on the Dynamic Infrastructure and each of the following blogs in the series will focus on one of the abstracted layers.

    The following concentrates on this role, and in particular on Enterprise-owned and operated infrastructure. The same basic functionality could be provided by a Cloud Services operator. There are significant issues around latency and risk in certain areas, such as 'real time' machinery operations, that will lead to the selection of on-premises Dynamic Infrastructure capability. It is most likely that a mix of external and internal Dynamic Infrastructure will be deployed in most Enterprises, with the Distributed Services Technology Management layer providing the necessary cohesive integration; a point taken up in Part 3b of this series.

    The Dynamic Infrastructure shares many of the core traits of Internet and Cloud Technology in providing capacity, as and when required, in response to demand. The development of the detailed specification started in 2012 with the publication by Cisco of a white paper calling for a new model of distributed Cloud processing across a network. Entitled 'Fog Computing', this concept became increasingly important as the development of IoT redefined requirements.

    In November 2015 a group of leading industry vendors (ARM, Cisco, Dell, Intel and Microsoft) founded the OpenFog Consortium. Today there are 56 members, including a strong representation from the Telecoms industry. Cisco has developed its products and strategy in tune with the vision statement of the OpenFog Consortium, which states the requirement as:

    “Fog computing is a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things. By extending the cloud to be closer to the things that produce and act on IoT data, fog enables latency sensitive computing to be performed in proximity to the sensors, resulting in more efficient network bandwidth and more functional and efficient IoT solutions. Fog computing also offers greater business agility through deeper and faster insights, increased security and lower operating expenses”

    It is worth pointing out there are subtle, but important, differences between Fog Computing and pure Edge-based Cloud Computing. Edge-based solutions more closely resemble a series of closed activity pools with relatively self-contained computational requirements, whereas Fog Computing processing is more interactive and distributed, using a greater degree of high-level service management from the Network. Naturally the two definitions overlap, and this, together with other terms, can be confusing. In practice, it is important to note that "Fog" certainly includes "Edge", but the term Edge is often used to indicate more standalone functionality.

    Three Technology vendors have focused their products and solution capabilities around providing such an infrastructure, with its mix of connectivity and processing triggered by a sophisticated management capability. Each vendor uses different terminology and has published their definitions on what they identify as the challenges and requirements.

    Constellation Research would like to thank Cisco, Dell and HPE for contributing the following overviews that describe their points of view in respect of building and operating a Dynamic Infrastructure. Each vendor also provided links to enable a more detailed evaluation to be made of their approach and products.

     

    Cisco’s Digital Network Architecture

    At Cisco, we are changing how networks operate into an extensible, software-driven model that makes networks simpler and deployments easy. Customer requirements for digital transformation go beyond technology such as IoT and require that the network can handle changes, security, and performance in a policy-based manner designed around the application and business need.

    Digital Network Architecture (DNA) is the framework for that network change, moving from a highly resource-intensive and time-consuming way of deploying network services and segments to a model built to speed these processes and reduce cost. With DNA, we are focusing on automating, analyzing, securing and virtualizing network functions. Networks need to be more than just a utility; they need to be business-driving and secure in both the proactive and reactive sense. To do this, Cisco is building on our industry-leading security products, combined with our industry-leading access products (including SD-WAN, wireless, and switching), to help customers change how they fundamentally work and embrace digital transformation.

    Some examples of our continued innovation in this space include products like APIC-EM, the central engine of our Cisco DNA. APIC-EM delivers software-defined networking capabilities with policy and a simple user interface. It offers Cisco Intelligent WAN, Plug and Play for deploying Cisco enterprise routers, switches, and wireless controllers, Path Trace for easy troubleshooting, and Cisco Enterprise Service Automation.

    Cisco is more than a networking vendor; we partner with our customers at all levels. We strive to understand not only what customers need at a technical and IT level, but what they need as a business. Cisco brings consistent, long-term investment to its products and services, adding value and features constantly. Nobody in the networking market invests in R&D and listens to customers like Cisco does. Cisco knows that the changing face of IT is to help bridge the gap to cloud and make sure that business needs are met with agile solutions that enhance the business. With Cisco DNA, CIOs, managers, and administrators all get what they need to move forward with digital transformation and IoT.

    The details of the Cisco range of products, and solutions, can be found in three places: One, Two, Three

     

    Dell Technologies Internet of Things Infrastructure

    With the industry's broadest IoT infrastructure portfolio, together with a rapidly growing ecosystem of curated technology and services partners, Dell Technologies cuts through the complexity and enables you to access everything you need to deploy an optimized IoT solution from edge to core to cloud. By working with Dell's infrastructure and curated partners, customers also get proven, use-case-specific solution blueprints to help achieve faster ROI. Dell has strong credibility in Industrial IoT from its origins in the supply of computing to the Industrial sector, as an early leader in sensor-driven automation, and through the EMC acquisition, which adds additional expertise in storage, virtualization, cloud-native technologies, and security and system management. Further, Dell Technologies is leading multiple open source initiatives to facilitate interoperability and scale in the market, since getting access to the myriad data generated by sensors, devices, and equipment is currently slowing down IoT deployments.

    The challenge with IoT is to securely and efficiently capture massive amounts of data for analytics and actionable insights to improve your business. Dell Technologies enables the flexibility to architect an IoT ecosystem appropriate for your specific business case with analytics, compute, and storage distributed where you need it from the network’s edge to the cloud.

    Part of Dell's net-new investment in IoT is a portfolio of purpose-built Edge Gateways with specific I/O, form factor and environmental specifications to connect the unconnected, capturing data from a wide variety of sensors and equipment. The Dell Edge Gateway line offers processing capabilities to start the analytics process and cleanse the data, as well as comprehensive connectivity to ensure that critical data can be integrated into digital business systems where insights can be created and business value generated. These gateways also offer integrated tools for both Windows and Linux operating systems to ensure that the distributed architecture can be secured and managed. Reference here

    Further, Dell EMC empowers organizations to transform business with IoT as part of their digitization initiatives. Dell EMC's converged solutions, including Vblock Systems, VxRack Systems, VxRail Systems, PowerEdge and other Dell EMC products, are prevalent in core data centers for enterprise applications, big data and video management software (VMS) as well as for cloud-native applications. Dell simplifies how businesses can tap IoT as part of their digital assets, from the edge, with Dell's Edge Gateways tied to sensors and operational technology, to the core data center and hybrid cloud from Dell EMC, which play a crucial role in blending historical and real-time analytics, processing and archival. The Dell EMC Native Hybrid Cloud Platform, a turnkey digital platform, accelerates time to value by simplifying the use of IoT as part of cloud-native app deployment. Included in this portfolio is the Analytic Insights Module, a fully engineered solution providing self-service data analytics with cloud-native application development in a single hybrid cloud platform, eliminating the months it takes to build your own.

    The details of Dell's range of products, and solutions, can be found here

     

    HPE’s Hybrid IT

    HPE believes that there are a number of dimensions to dynamic infrastructure. It is estimated that 40-45% of IoT data processing will occur "at the edge", close to where the sensors and actuators are. This is why HPE has created its "EdgeLine" range of edge compute devices. HPE calls this the first dimension of Hybrid IT: getting the right mix of edge and core compute.

    While "real-time" processing of IoT data will occur both at the edge and at the core, the "deep analytics" that a digital world requires, such as design simulations and deep learning, may need specialised computers because Moore's law is running out of steam. HPE believes another dimension to Hybrid IT is the mix of conventional versus specialised compute. HPE's specialised compute includes its SuperDome and SGI ranges.

    Digitization is forcing a change in the architecture of applications. Gone are the three-tier, web-client-to-app-server-to-database applications. They are replaced by application and service meshes: meshes of services that applications can call. This is why micro-services and containers are becoming so popular (Docker has been downloaded over 4 billion times, for example). HPE built its Synergy servers with this new application architecture in mind:

    • CPU, storage and fabric can be treated as independently scalable resource pools. This scaling can be applied to both physical infrastructure (for containers running directly on top of the hardware) and virtual machines.
    • Infrastructure desired state can be specified in code. This allows the infrastructure on which an application runs to be put under source control alongside the source code (see the sketch after this list).
    • Because containers carry their required infrastructure specification with them, this specification can be given directly to the Synergy server for provisioning before containers are layered on top.
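    As a conceptual sketch only (not HPE's actual Synergy or OneView API), "desired state specified in code" means the infrastructure definition is just data that can sit in source control next to the application and be reconciled against what is actually deployed; the field names below are assumptions:

        # Illustrative desired-state specification; field names are made up for the example.
        desired_state = {
            "compute_nodes": 4,
            "storage_gb": 2048,
            "fabric_bandwidth_gbps": 25,
        }

        def plan_changes(desired, actual):
            """Return the adjustments needed to bring actual state in line with desired state."""
            return {key: desired[key] - actual.get(key, 0)
                    for key in desired
                    if desired[key] != actual.get(key, 0)}

        print(plan_changes(desired_state, {"compute_nodes": 2, "storage_gb": 2048}))
        # {'compute_nodes': 2, 'fabric_bandwidth_gbps': 25}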

    Full details on HPE Infrastructure products can be found here.

     

    Addendum

    A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing. https://en.wikipedia.org/wiki/Distributed_computing
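    The divide-into-tasks, message-passing model in the definition above can be illustrated with a small Python sketch (standard library only; the squaring "work" is a stand-in for a real task):

        # Tasks go out over one queue, results come back over another; the queues carry the messages.
        from multiprocessing import Process, Queue

        def worker(tasks, results):
            for task in iter(tasks.get, None):      # stop when the sentinel None arrives
                results.put(task * task)            # stand-in for real work

        if __name__ == "__main__":
            tasks, results = Queue(), Queue()
            workers = [Process(target=worker, args=(tasks, results)) for _ in range(3)]
            for w in workers:
                w.start()
            for n in range(10):
                tasks.put(n)
            for _ in workers:
                tasks.put(None)                     # one sentinel per worker
            print(sorted(results.get() for _ in range(10)))
            for w in workers:
                w.join()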

    A Smart System is a distributed, collaborative group of connected Devices and Services that react to a continuous dynamic changing condition by invoking individual, or groups, of Smart Services to deliver optimized outcomes. The term originated in industrial automation and therefore the current Wikipedia definition seems somewhat limited in its scope when compared to the wider IoT use of the term.

  • By Chris Kanaracus

    Constellation Insights

    Earlier this month, it emerged that a major Amazon Web Services outage was caused by an engineer making a typo while debugging a system. While not the same thing, the accidental exposure of hundreds of Australian politicians and staffers' private mobile phone numbers serves as another reminder that when it comes to security, human error can trump any number of technological measures. The Sydney Morning Herald has the details:

    The Department of Parliamentary Services failed to properly delete the numbers before it published the most recent round of politicians' phone bills on the Parliament House website, potentially compromising the privacy and security of MPs from cabinet ministers down.

    While in previous years the numbers were taken out of the PDF documents altogether, this time it appears the font was merely turned white - meaning they could still be accessed using copy and paste.

    The only numbers absent were those of the very top cabinet ministers including Prime Minister Malcolm Turnbull, Treasurer Scott Morrison, Attorney-General George Brandis and a handful of others.

    The department has blamed a private contractor, TELCO Management, for the stuff-up. 

    DPS officials have since deleted the private numbers after receiving word about them from the newspaper.

    "I really wish we were all a bit more self-conscious about this style of error," says Constellation Research VP and principal analyst Steve Wilson. "We have a host of office tools which are incredibly rigid when you think about it. Our computers are wretchedly unforgiving. 

    "In this latest case, someone has deleted some sensitive data in a file, or they thought they had deleted it, but no, the data was still there, hidden, and it cropped up again when the file was moved to a public location," Wilson adds. 
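    The mechanics are easy to demonstrate: text drawn in a white font is still part of the PDF's content stream, so any text extractor, or plain copy and paste, recovers it. Here is a hedged sketch using the pypdf library (the filename is an assumption):

        # White text is not redaction: extraction ignores colour and returns hidden text too.
        from pypdf import PdfReader

        reader = PdfReader("published_phone_bills.pdf")
        for page in reader.pages:
            text = page.extract_text()
            if text:
                print(text)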

    As it happens, the Australian government is becoming a bit notorious for this kind of thing. Other recent episodes include the release of passport details of 20 or so visiting heads of state, Wilson notes. And worse, the inadvertent publication of names and addresses and other details of 10,000 refugee asylum seekers, many of whom were in personal danger in their countries of origin. "Are we just too laid back down under?" he says.

    The truth is that these are the "sorts of mistakes anyone without a master's degree in computing might make," Wilson adds. "Computers are like nitroglycerine. They're kind of safe if you're unnaturally careful in the way you handle them."

    Moreover, when correcting a security breach it's crucial to consider other ways compromised data may still be exposed. The website Junkee found that even after the DPS deleted the phone bills, copies of them remained available in Google's cache and the numbers were actually openly visible. They've since been removed from Google's servers.

    24/7 Access to Constellation Insights
    Subscribe today for unrestricted access to expert analyst views on breaking news.
