Are Technology Shakeups in Store for 2015? http://hurwitz.com/blogs/judith-balancing-act/entry/are-technology-shakeups-in-store-for-2015

Before I start with my predictions, let me explain what I mean by a prediction. I believe that predictions should not be about the end of a technology cycle but about the timing for when an issue begins to gain traction that will result in industry shifts. As I pointed out in my book, Smart or Lucky? How Technology Leaders Turn Change Into Success (Jossey-Bass, 2011), important industry initiatives and changes usually require decades of trial and error before they result in significant products and important trends. So, in my predictions, I am pointing out changes that are starting.

I know that the rule is that you need to come up with ten predictions when a new year is about to start. But I decided to break the rule and stick with seven. Call me a renegade. I think that we have a very interesting year taking shape. It will be a year when emerging technologies move out of strategy and planning into execution. So, I expect that 2015 will not be business as usual. There will be political shakeups in both IT and business leadership as technology takes on an increasingly strategic role. Companies need to know that the technology initiatives that are driving revenue are secure, scalable, predictable, and manageable. While there will always be new emerging technologies that take us all by surprise, here is what I expect to drive technology execution and buying plans in the coming year.

1. Hybrid cloud management will become the leading issue for businesses as they rely on hybrid cloud.

It is clear that companies are using a variety of deployment models for computing. Companies are using SaaS, which obviously is a public cloud-based service. They are using public cloud services to build and sometimes deploy new applications and for additional compute and storage capacity. However, they are also implementing private cloud services based on their requirements for security and governance. When cloud services become commercial offerings for partners and customers, economics favors a private cloud. While having a combination of public and private is pragmatic, this environment will only work with a strong hybrid cloud management service that balances workloads across these deployment models and manages how and when various services are used.

2. Internet of Things (IoT) will be dominated by security, performance, and analytics. New players will emerge in droves.

The Internet of Things will be coming on strong, since it is now possible to store and analyze data coming from sensors on everything from cars to manufacturing systems and health monitoring devices. Managing security, governance, and overall performance of these environments will determine the success or failure of this market. Businesses will have to protect themselves against catastrophic failure – especially when IoT is used to manage real-time processes such as traffic management, sensors used in monitoring healthcare, and power management. There will be hundreds of startups. The most successful ones will focus on security, management, and data integration within IoT environments.

3. Digital marketing disillusionment sets in – it is not a substitute for good customer management.

Many marketing departments are heavily investing in digital marketing tools. Now corporate management wants to understand the return on investment. The results are mixed. First, companies are discovering that if they do not improve their customer care processes along with digital marketing software and processes, digital marketing is useless. In fact, it may actually make customer satisfaction worse since customers will be contacted through digital marketing services but will not get better results. This will result in a backlash. Unfortunately, it may be the messenger who is blamed rather than the culprit – poor customer care.

4. Cognitive computing will gain steam as the best way to capitalize on knowledge for competitive advantage.

The next frontier in competitive differentiation is how knowledge is managed. The new generation of cognitive solutions will help companies gain control of their unstructured data in order to create solutions that learn. Expect to see hundreds of startups emerge that combine unstructured data management with machine learning and statistical methods, advanced analytics, data visualization, and Natural Language Processing.
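To make that combination more concrete, here is a minimal sketch of the kind of pipeline such startups assemble – turning unstructured text into numeric features and letting a statistical model group it. The sample documents and cluster count are invented for illustration; this is not any particular vendor's product.

```python
# Illustrative sketch only: combines unstructured text handling with a
# simple statistical/machine-learning step using scikit-learn.
# The documents and cluster count are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Customer reported an outage after the latest firmware update",
    "Invoice dispute escalated to the billing team",
    "Sensor telemetry shows rising temperature in pump 7",
    "Firmware rollback resolved the outage for most customers",
]

# Turn free-form text into a numeric feature matrix (unstructured -> structured).
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(documents)

# Group similar documents so a downstream solution can "learn" recurring themes.
model = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = model.fit_predict(features)

for text, label in zip(documents, labels):
    print(label, text[:50])
```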

5. IT will gain control of brokering and managing cloud services to ensure security and governance.

For the past five years or more, business units have been buying their own public cloud compute and storage services, bypassing the IT organization. Many of these organizations were frustrated with the inability of IT to move fast enough to meet their demands for service. When these departments were experimenting with cloud services, expenses could easily be hidden in discretionary accounts. However, as these public cloud services move from pilot and experimentation to business applications and services, there are implications for cost, governance, and management. As often happens when emerging technology becomes mainstream, IT is being asked to become the broker for hybrid cloud services.

6. Containerization and well-designed APIs are becoming the de facto method for creating cross-platform services in hybrid computing environments.

One of the benefits of a services architecture is that it is possible to truly begin to link computing elements together without regard to platform or operating system. The maturation of container technology and well-designed APIs will be a major game changer in 2015. Containers and APIs are linked because both focus on abstracting services and complexity. These abstractions are an important step toward moving from building applications to linking services together.

7. Data connectivity combined with business process will emerge as the biggest headache and opportunity in hybrid environments.

Data connectivity and business process issues are not a new problem for businesses. However, there is a subtle change with major ramifications. Because business units tend to control their own data both on premises and in SaaS applications, it is increasingly difficult for business leadership to create a unified view of data across a variety of business units. The inability to bring data and processes together across silos puts businesses at risk. This complexity will emerge as a major challenge for IT organizations in 2015.


judith.hurwitz@hurwitz.com (Judith Hurwitz) Vendor Strategy Tue, 16 Dec 2014 17:24:48 +0000
Red Hat Summit Focuses on the Business Impact of Customers' Hybrid Cloud Migrations http://hurwitz.com/blogs/judith-balancing-act/entry/red-hat-summit-2017-showing-the-business-impact-of-hybrid-cloud-migration

 

 

Mindful that the move to hybrid cloud is accelerating – and that business priorities are increasingly driving IT buying decisions – Red Hat executives at Red Hat Summit 2017 highlighted the business impact of adopting their portfolio of open-source software products.

The focus at this year's Summit: Providing developers and IT operators with tools that reduce the number of individual steps needed to complete many repetitive tasks. Integration of functionality and unified consoles for monitoring and management are intended to reduce operational costs for the business. This approach addresses the "Dev" and the "Ops" personnel within an IT organization – both of which impact overall operational efficiency.

Security, availability, and consistency – the need to address all of these brings home the point that hybrid clouds must extend the reliability of enterprise IT into the cloud-computing world. Bringing those attributes to cloud development and deployment tools is a high priority for Red Hat, which is already widely used in enterprise and cloud environments.

 

Patterns of Adoption Are Changing

Hybrid cloud adoption, linking on-premises enterprise IT with off-premises public clouds, is becoming widely accepted among IT and business managers. This push into the cloud is being driven by digital transformation, as businesses change their business models to compete more effectively. Businesses want to gain IT flexibility and business agility as their industries (e.g., retail, financial services, healthcare) cope with dramatic change.

Business managers are highly influential in making new technology decisions, and their buy-in is required for technology adoption. In large businesses, their approval is absolutely essential to hybrid-cloud planning and deployments. In small/medium businesses (SMBs), the decision to push more workloads to public cloud providers is often driven by cost and operational priorities.

Without support from business managers, IT organizations will find it increasingly difficult to secure the funding for next-generation systems and software. Enterprise customers describing their Red Hat deployments at the Summit conference included the Disney/Pixar animation studio, the Amadeus airline reservation system, and Amsterdam's Schiphol Airport.

 

Products Aimed at Dev and Ops

At the Summit, many of Red Hat's announcements focused on simplification and ease of use for two main roles within IT organizations: software developers and IT operations personnel.

With this approach, developers and operations personnel can each focus on their primary tasks, working more effectively, as IT silos are removed from the infrastructure, and workloads move to available systems and storage resources. One key example: The encapsulation of applications in Linux containers has the practical effect of separating application development from infrastructure deployment, via abstraction. By leveraging containers, IT organizations can move applications through the dev/test/production pipeline more quickly.

Here are three product categories addressed by Red Hat announcements at the Summit: 

  • Containers. The Red Hat OpenShift Container Platform 3.5 allows OpenShift containers, based on Red Hat Enterprise Linux (RHEL), to work with open-source Kubernetes orchestration software and Docker. The Linux containers provide a runtime for application code and leverage technology from other widely adopted open-source projects.
  • OpenShift.io. The platform combines the features of several widely adopted development tools, helps teams manage work items through the development process, and prompts developers with code options as they build new cloud-ready applications. OpenShift.io uses a browser-based IDE built on Eclipse Che.
  • Ansible. Red Hat is extending Ansible, its agentless automation tool, to provide automation services across the full portfolio of Red Hat solutions. Red Hat acquired Ansible and its technology in 2015. Ansible is now integrated with the Red Hat CloudForms cloud management platform, Red Hat Insights proactive analytics, and the Red Hat OpenShift Container Platform.

The greatest opportunities for growth, according to Red Hat executives, are in app/dev collaboration tools, middleware and cloud management software. The $2.5 billion company plans to accelerate its top-line revenue growth by leveraging its partnerships with hardware systems, software, services and cloud service providers (CSPs).

 

Partnering with Cloud Service Providers (CSPs)

As it grows its ecosystem, Red Hat is deepening its partnerships with cloud service providers (CSPs) as customers. Certainly, many enterprise applications will remain on-premises -- inside the firewalls of data centers -- due to security and data governance concerns. However, the adoption of cloud computing is increasing, with more enterprise workloads migrating to CSPs, including Amazon Web Services, Microsoft Azure and Google Cloud Platform (GCP).

Red Hat announced a strategic alliance with Amazon Web Services (AWS). Red Hat wants to tap the deep reservoir of AWS developers as it grows its sales of OpenShift tools, JBoss middleware, and RHEV virtualization. Through this alliance, Red Hat will natively integrate access to AWS services into the Red Hat OpenShift Container platform. This gives hybrid cloud developers a new way to gain direct access to AWS cloud services, including Amazon Aurora, Amazon EMR, Amazon Redshift, Amazon CloudFront, and Elastic Load Balancing.

Other CSP relationships are important, because many customers are moving to multi-cloud strategies. Red Hat is working with Google Cloud Platform (GCP) on open-source projects, including the ongoing development of Kubernetes orchestration software. Linux containers support multiple programming languages, and provide a runtime environment for applications built on microservices. This allows them to scale up by scaling “out” in a style originally developed by CSPs for hyperscale applications.

Microsoft Azure and Red Hat delivered at least two joint Summit presentations in Boston, showing their increasing co-presence in the cloud computing world. It is tangible evidence of the way that the cloud has evolved, with Linux and Windows workloads running side-by-side, both on-premises and off-premises, on the Microsoft Azure cloud.

 

Business Objectives and Enterprise Clouds

Red Hat sees its future opportunity in addressing containers and microservices for end-to-end application development for hybrid cloud. Another focus for the company is improving cloud management for customers that are migrating more business logic into the cloud. Red Hat plans to build on the work it has done with early adopters of hybrid cloud -- and to make it easier for new customers and prospects to consider migrating more business workloads to hybrid clouds.

Specifically, three key ingredients for expanding Red Hat's total available market are: developing container technology, building on DevOps toolsets and automating cloud management. That is why the business marketing messages are so important for the opportunity that Red Hat is embracing as it plans to grow its top-line revenue in 2017.


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Tue, 16 May 2017 22:32:21 +0000
A tribute to Marcia Kaufman: A Woman of Valor http://hurwitz.com/blogs/judith-balancing-act/entry/a-tribute-to-marcia-kaufman-a-woman-of-valor

Marcia and I would always tell people that we met in high school when we were both 15 years old. But that doesn’t tell the story of a friendship and a business partnership that began in 2003. I was an entrepreneur at a crossroads of my career. I had lost the company I had started in 1992 and walked away from a company that I started in 2002. I wasn’t sure what I wanted to do next and I will admit that I was afraid that I would fail. Then one day I got a phone call from Marcia. We had been in touch off and on over the years. Marcia too was at a crossroads. She had recently left a job as an industry analyst and was also trying to decide what to do next.

She said to me, “If you are thinking about starting another company I am interested.” To be truthful, I wasn’t sure that I was ready for what I knew would be difficult. But I agreed that we should meet for coffee. Despite my misgivings, and perhaps because of Marcia’s infectious optimism, I decided that it made sense to give it a try.

And I was right. It was hard. In the beginning we struggled to find projects and position our very tiny company. We taught each other a lot about working as a team, about technology, and about having fun while working hard. I could tell you hundreds of stories about our adventure over 13 years. There was the time that we worked all Christmas day so that we could finish a research paper. I could tell you about the one time that Marcia yelled at the top of her lungs at a freelance researcher who was working with us on a project. It was unusual because Marcia could get along with everyone – except this one very annoying writer. I could tell you about all the times that we would meet at the airport to go to conferences in Las Vegas. While I dreaded going to Vegas, it energized her. She made these trips fun. She loved the concerts that took place during the shows. She loved to dance and sing. I often left early to get some sleep. But Marcia never seemed to tire and stayed long after I went to bed.

I would often push us to take on new projects, such as the many books we wrote together. She would look at me as though I was crazy (which I probably was) but she would never say no. Even when she was sick, we worked together on the hardest writing project we ever undertook – cognitive computing and big data analytics. It was a wonderful book and a testament to Marcia’s brilliance and perseverance.

Over the last three years, it became harder and harder for Marcia to work. This made her angry, because she loved researching, learning, and writing. Over the years, she became a master writer. She was widely respected and deeply loved. She would tell me in moments when the two of us were together how very sick she was. She was quite aware of her condition, but she continued to work. When she couldn't come into the office, she would work at home. Her doctor was shocked that she was still working. In fact, I remember Marcia telling me that her doctor expected her to stop working and just take care of herself. She continued to work until the disease finally made it impossible. With every setback she would first say to me, "Oh, Judith, I just got such bad news." And in the next sentence she would tell me, "…but I am going to beat it." In fact, the last time I visited Marcia when she was in rehab two weeks before she died, she told me that she had come to accept what was happening to her. But in characteristic Marcia fashion, her next words were, "But I am still going to fight."

This is at least the sixth draft of this note that I have written, trying to capture the Marcia I knew. I loved Marcia as a friend and colleague. I miss her strength, her honesty, her intensity, her kindness, her elegance, and her love of life. I don't think that I will ever meet another person like Marcia. She held onto life with such fervor.

As brokenhearted as I am, I know that Marcia lived life as fully as anyone I have ever known. I will miss you forever and you will always be in my heart.


judith.hurwitz@hurwitz.com (Judith Hurwitz) Uncategorized Sun, 26 Mar 2017 14:31:47 +0000
Redfish Emerges as an Interoperability Standard for SDI http://hurwitz.com/blogs/judith-balancing-act/entry/redfish-emerges-as-an-interoperability-standard-for-sdi

The world’s data centers are working to adopt Software Defined Infrastructure (SDI) – but they are far from reaching their goals. The single biggest challenge in SDI is achieving interoperability between many kinds of hardware. Without that, a data center’s systems become a Tower of Babel, preventing IT system admins from seeing a unified view of all resources – and managing them.

Built to leverage virtualized infrastructure, SDI will be easier to achieve if there are more bridges between platforms – leading to better management. This blog focuses on an emerging management standard called Redfish, which is designed to help make SDI a day-to-day reality for hybrid cloud.

 

Seeking More Unified Management for Software-Defined Infrastructure (SDI) 

Redfish addresses an everyday reality: Most large organizations have “inherited infrastructure” based on years of successive IT decisions – and waves of systems deployments. Multi-vendor and mixed-vendor environments are the norm in enterprise data centers – but most customers would prefer to see more unified views of all devices under management. While many have installed software-defined storage and servers – most have not yet adopted software-defined networks.

That's why we see Redfish APIs as a practical step toward SDI – especially for enterprise customers with large, heterogeneous, multi-vendor installations.

Redfish offers a standardized way to address scalable hardware from a wide variety of vendors. Just as important is its growing ecosystem, as it is adopted by a large and growing group of vendors. To keep this multi-vendor technology effort moving along, the Redfish APIs are being managed by the Distributed Management Task Force (DMTF) through its Scalable Platforms Management Forum (SPMF).

 

How the Technology Works           

Here's how the technology works: Built on RESTful APIs and leveraging JSON, Redfish is a secure, multi-node-capable replacement for IPMI-over-LAN links. It manages servers, storage, network interfaces, switching, and software and firmware update services. This means that a wide range of data center devices can be managed via the Redfish interfaces.
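To make the RESTful model concrete, here is a minimal sketch of walking a Redfish service with Python's requests library. The management-controller address and credentials are placeholders, and a production script would validate certificates and use session-based authentication; /redfish/v1 is the standard DMTF-defined service root.

```python
# Minimal illustrative Redfish query over HTTPS using REST + JSON.
# The controller address and credentials below are placeholders.
import requests

BMC = "https://10.0.0.42"          # management controller address (example)
AUTH = ("admin", "password")       # placeholder credentials

# /redfish/v1 is the standard Redfish service root defined by the DMTF.
root = requests.get(f"{BMC}/redfish/v1", auth=AUTH, verify=False).json()

# Follow the link to the systems collection and list each managed server.
systems_uri = root["Systems"]["@odata.id"]
systems = requests.get(f"{BMC}{systems_uri}", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Id"), system.get("PowerState"), system.get("Model"))
```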

It’s important to note that standards efforts often fail if there is not enough buy-in by the vendors working to implement those standards. However, we’re finding that Redfish is drawing support from a broad array of hardware and software vendors.

 

What's New

A flurry of Redfish announcements came in August 2016. Following that, there was, indeed, a long silent period. But in January 2017, a Host Interface Specification was added to the existing TCP/IP-based out-of-band Redfish standard. The new specification allows applications and tools running on an operating system to communicate with the Redfish management service.

If we were to take a snapshot of the development process, we would see that it has matured since 2015. Now, a parallel project called Swordfish, being developed by SNIA (Storage Networking Industry Association) members, is focused on storage management. Swordfish is designed to make it easier to integrate scalable storage solutions into hyperscale and cloud data centers. Because Swordfish is an extension of Redfish, it uses the same RESTful interfaces and JavaScript Object Notation (JSON) to seamlessly manage storage equipment and storage services, in addition to servers.

 

A Pragmatic Solution for Hybrid Clouds

In our view, the DMTF’s decision to support RESTful APIs is a pragmatic approach for customers, who won’t have to throw out familiar software tools in order to build unified views of all devices under management. For customers, the important thing is that Redfish can be used within enterprise data centers – and across hybrid clouds spanning multiple data centers and CSP public clouds.

It will fit with RESTful APIs and JSON, which are already widely adopted by data centers. Importantly, a growing group of hardware and software vendors already support Redfish. This group includes: American Megatrends, Broadcom, Cisco, Dell EMC, Ericsson AB, Fujitsu, Hewlett Packard Enterprise (HPE), Huawei, IBM, Insyde, Inspur, Intel, Lenovo, Mellanox, Microsemi, Microsoft, NetApp, Oracle, OSIsoft, Quanta, Supermicro, Vertiv, VMware and Western Digital.

Clearly, there is more work to be done, and more “pieces” to solve the interop puzzle need to be put in place. The fact that Redfish is being supported by many companies – and that some of them are direct competitors – is a good sign for wider adoption.

The reason for their cooperation: interoperability is table stakes for SDI.

 


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Tue, 14 Mar 2017 20:19:14 +0000
IBM Quantum Computing Jumps to Commercial Use Via Cloud http://hurwitz.com/blogs/judith-balancing-act/entry/ibm-quantum-computing-jumps-to-commercial-use-via-the-cloud

IBM’s quantum computing technology, developed over decades, is ready for commercialization. It is a fundamentally different approach to computing than is used in today’s systems – and, as such, represents a watershed in computing history.

By allowing scientists and researchers to model the complexities inherent in natural phenomena and financial markets, quantum computing is a new approach to the way in which computing itself is done. It is different than Big Data analytics, which finds patterns in vast amounts of data. Rather, it will generate new types of data characterizing phenomena that couldn’t be quantified before.

What began deep in the IBM research labs in New York and Zurich is now ready to provide computing services, via the IBM Bluemix cloud.

On March 6, 2017, IBM announced its initiative to build commercially available quantum computing systems.

  • The IBM Q quantum systems and services will be delivered via the IBM Cloud platform. The core computing will be done on “qubits,” which are the quantum computing units for programming. The qubits can be orchestrated to work together; up to now, five qubits have been available to early users, and more qubits will become available in 2017.
  • IBM is releasing a new API (application programming interface) for IBM quantum computing, which will allow developers to program in widely used languages, such as Python. The resulting code will be able to access the quantum computing resources, housed in the data center of IBM’s Yorktown Heights, N.Y., research laboratory.
  • IBM is also releasing an upgraded simulator that can model circuits with up to 20 qubits. Later this year, IBM plans to release a full software development kit (SDK) on the IBM Quantum Experience that will allow programmers and users to build simple quantum applications.

 

Quantum Computing in Brief

Quantum computing is designed to generate data based on the physics principles of uncertainty. Many natural phenomena, such as the structure of molecules and medicines, can be better understood by analyzing thousands, or millions, of possibilities, or possible outcomes.

But the sheer scale of the work extends beyond the reach of classical computing used in today’s data centers and Cloud Service Providers (CSPs).

Unlike IBM Watson, which focuses on Big Data and analytics, quantum technology seeks to bring insights based on what is "not" known, rather than finding patterns in known data. Examples include: learning more about chemical bonding and molecules; creating new cryptography algorithms; and advancing machine learning. This is done through an approach called "entanglement," in which large numbers of potential outcomes are explored and orchestrated – and the resulting data is moved through new types of high-speed communications links.

 

How It Works

Based on a technology that requires super-cooling at less than one degree Kelvin (a measure on the Kelvin temperature scale), IBM’s quantum computing marries five key elements: a new type of semiconductor processor built with silicon-based superconductors; on-chip nanotechnology; deep cooling containers that house the computer; programming with microwaves – and delivery via the cloud to end-users.  

The reason for the super-cooling is that quantum computing compares “quantum states” that are ever-changing in superconducting materials – making it impossible to pinpoint a given state as a computer “1” or a “0.” However, by leveraging extremely small gaps in electrical pulses traversing the super-cooled semiconductor surfaces, quantum computing finds the likelihood, or the probability, associated with multiple known “states” of the data – even though at any one moment the actual states of that data are in constant flux.
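As a rough, back-of-the-envelope illustration of how probabilities (rather than fixed 1s and 0s) fall out of quantum states, the following toy simulation uses plain NumPy to build a two-qubit entangled state. It is a mathematical sketch only – not IBM's quantum hardware or its programming interface.

```python
# Toy simulation of two entangled qubits using plain linear algebra (NumPy).
# This illustrates superposition and entanglement; it is not IBM's quantum API.
import numpy as np

# Start in the |00> state.
state = np.array([1, 0, 0, 0], dtype=complex)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: puts one qubit in superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # entangles the two qubits

# Apply H to the first qubit, then CNOT with the first qubit as control.
state = CNOT @ np.kron(H, I) @ state

# Measurement yields probabilities, not definite values:
probabilities = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"|{basis}>: {p:.2f}")   # ~0.50 for |00> and |11>, 0 for the others
```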

One quick example: It is impossible to find the exact position of all the electrons spinning inside specific molecules, preventing scientists from finding all the possible combinations of electrical bonds inside the molecule. This is important in medicine and pharmacology, where the quantum approach could extend the molecule-folding analysis that is widely used today in biotechnology. As a result, new medical treatments may emerge, and new approaches to drug development may be created. And many more scenarios for research and exploration may open up with wider access to quantum computing capabilities.

 

A Bit of History

Only the core concept of quantum computing existed in 1981, when Nobel laureate Richard Feynman, the famed physicist, spoke at the Physics of Computation Conference, hosted by M.I.T. and IBM. (Feynman is best known for his physics work, and for identifying the design fault in the Space Shuttle's O-rings that caused the 1986 Challenger explosion.) During his 1981 presentation, Feynman challenged computer scientists to develop computers based on quantum physics.

By the late 1980s, this led to the creation of Josephson-Junction computers, which worked in a prototype super-cooled enclosure, but proved impractical to use in the enterprise data centers of that era. Some had considered its use in deep space – but even that proved to be “not cold enough” to achieve the quantum computing effects. But progress in quantum research continued in the 1990s and early 2000s, relating to programming code for quantum computers, deep cooling in physical data-center containers, and scaling up the quantum analysis.

Key developments in computer science itself have paved the way for the IBM quantum computing initiative. Stepping stones along the way included developing, and working with, the qubits, getting them to work together in “entanglements” to compare computing states, and improvements in coding quantum computers to interface with classical von Neumann computers based on “1s and 0s.”  

In quantum computing, the tiny superconducting Josephson junctions [electrical gaps], operating in extremely low-temperature containers, find multiple possible outcomes in such fields as chemistry, astronomy, and finance.

The advent of cloud allows quantum computing to take place in special environments, housed in super-cold, isolated, physical containers – while supporting end-user access from remote users worldwide. This model for accessibility changed the calculus for bringing quantum to the marketplace.

Now, the IBM Quantum Experience, as IBM is calling it, is more than an experiment, or a prototype. Rather, it is now discoverable as a new resource on the IBM Cloud, and is already being used by a select group of commercial customers, academic users, and IBM researchers.

 

Why Should Customers Care?

Certain classes of customers are likely to move into quantum computing analysis early on: Areas of interest would include finding new ways to model financial market data; discovery of new medicines and materials; optimizing supply chains and logistics, improving cloud security – and improving machine learning.

This next stage on the path to quantum computing will see collaborative projects involving programmer/developers, university researchers and computer scientists. IBM intends to build out an ecosystem around quantum computing. A number of researchers, including those at M.I.T., the University of Waterloo in Ontario, Canada, and the European Physical Society in Zurich, Switzerland, are already working with IBM on quantum computing. So far, 40,000 users have run more than 275,000 experiments on the IBM quantum computing resource. All are accessing IBM quantum computing – and IBM expects to expand the program throughout 2017.

In addition, there is an IBM Research Frontiers Institute, a consortium that looks at the business impact of new computing technologies. High-profile commercial companies that are founding members of the institute include Canon, Hitachi Metals, Honda, and Samsung – and IBM is asking other organizations to join as members.

 

Quantum Computing’s Future

The time is right for quantum computing – a new way to explore endless permutations of data about possible outcomes. It requires a different kind of technology – not the 1s and 0s of classical computing. It has taken decades to mature to the point where it is both accessible and programmable. It is still "early days" in quantum computing, but IBM's moves to commercialize the technology are positive ones, now involving a wider group of partners in a new and evolving ecosystem around quantum computing.

 

 

 


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Mon, 06 Mar 2017 21:07:13 +0000
Cybersecurity: Three Paths to Better Data Integration http://hurwitz.com/blogs/judith-balancing-act/entry/cybersecurity-demands-fewer-data-silos-more-data-integration

Data integration from multiple security point-products is a real problem for many enterprise customers. There are too many threat intelligence feeds – and no easy way to view all incoming data in a contextual way that facilitates interpretation, analysis, and remediation. Therefore, knocking down your organization’s in-house security software silos may become an essential element to improving your cybersecurity profile.

Cybersecurity relies on tools to identify an increasing number of threats. However, the proliferation of security point products slows down the ability of organizations to identify threats and respond. For many customers, there is too much input, from too many sources – and few ways to analyze all of it efficiently.

Here are three paths to achieve better data integration for security:

  • Frameworks. For customers, the challenge is how to achieve IT simplification to better guide their defenses against security threats. Some customers will look to software frameworks that can plug in, or integrate, data from multiple point products. This approach works: SIEM (Security Information and Event Management) is the most common consolidation point, but it is complex, and it may require adopting standardized APIs or agreeing to use a proprietary software framework from a single vendor. (A simple sketch of the kind of data normalization this involves appears after this list.)
  • Cloud services. Many customers are now looking to the cloud itself to scale up their security analytics and to leverage Cloud Service Providers' (CSPs') security tools. Using that approach, CSPs gather security data, analyze it, and send remediation recommendations out to their rapidly growing customer bases.
  • Containers. A third approach is to containerize the applications and data sources. By using software-defined containers, the data is isolated, and the total "surface" area for attack is reduced. This approach was discussed at ContainerWorld 2017 in Santa Clara, CA, Feb. 21-22, 2017.
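As a sketch of what this kind of data integration looks like in practice, the snippet below maps alerts from two differently shaped (and entirely hypothetical) security feeds into one common schema so they can be correlated in a single view. Every field name and sample record is invented for illustration.

```python
# Hypothetical sketch: normalize alerts from two differently shaped
# security feeds into one common schema so they can be viewed together.
# All field names and sample records are invented for illustration.
from datetime import datetime, timezone

firewall_feed = [
    {"ts": 1488302400, "src": "203.0.113.7", "action": "blocked", "sev": 3},
]
endpoint_feed = [
    {"time": "2017-02-28T18:20:00Z", "host": "laptop-42", "event": "malware_detected", "severity": "high"},
]

SEVERITY_MAP = {1: "low", 2: "medium", 3: "high", "low": "low", "medium": "medium", "high": "high"}

def normalize_firewall(record):
    return {
        "timestamp": datetime.fromtimestamp(record["ts"], tz=timezone.utc).isoformat(),
        "source": "firewall",
        "subject": record["src"],
        "event": record["action"],
        "severity": SEVERITY_MAP.get(record["sev"], "unknown"),
    }

def normalize_endpoint(record):
    return {
        "timestamp": record["time"],
        "source": "endpoint",
        "subject": record["host"],
        "event": record["event"],
        "severity": SEVERITY_MAP.get(record["severity"], "unknown"),
    }

# Build one unified, time-ordered view across both point products.
unified_view = [normalize_firewall(r) for r in firewall_feed] + \
               [normalize_endpoint(r) for r in endpoint_feed]

for event in sorted(unified_view, key=lambda e: e["timestamp"]):
    print(event)
```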

 

The bottom line: Integration of software inputs is essential to improving security in a highly networked IT environment. This data integration is critical to providing a unified view of threats facing businesses and organizations, so that they can be fully seen – and addressed by IT staff.

  

Reducing Security Silos; Integrating Data

At the RSA security conference in San Francisco, David Ulevitch, founder of OpenDNS and a vice president of Cisco’s networking group, made the argument for data integration clearly. Customers need to reduce the number of information silos carrying security data – and they need to integrate the results for a full, 360-degree view of the security threats facing their organization.

His conclusion: cloud services will provide an efficient way to deliver security data more quickly and efficiently. Otherwise, standards battles over APIs will bog down progress – even as the threat “surface” expands from 50 billion devices to hundreds of billions in the IoT world.

Many speakers, in their RSA talks and presentations, came to a similar conclusion. For example, Ret. Gen Keith Alexander, former director of the NSA, told the Cloud Security Alliance (CSA) meeting at RSA that small companies, lacking the resources of large companies, would find it hard to address the growing security threats without leveraging security cloud services from CSPs.

 

Cast the Net Wider

Here’s why building a unified view of all security inputs is essential for companies seeking to defend their security perimeter: Without it, customers would likely miss important signals of threat behavior, and would not see “patterns in the data” that would point to security vulnerabilities. Large companies can well afford to maintain large IT staff, and to host their own, customized, security dashboards. But mid-size and small companies are looking to framework software partners and cloud services partners to extend their security “net” to find security intrusions and hacking.

 

Next Steps for Security Vigilance

Acquiring all of this software for on-site monitoring would become quite expensive, especially for SMBs. Even the big companies, with their larger attack surface and deeper investments in legacy infrastructure, will need help in pulling together as many security-related inputs as possible.

The rapid growth of the security ecosystem demands that customers pay close attention – spending much time winnowing through the long list of software products and security-related cloud services. Now, they need to take the next step, by integrating the data from their portfolio of security point-products.

 

 

My colleague Chris Christiansen contributed to this blog document. For more details, see The Bozman Blog on www.hurwitz.com.

 

 


jean.bozman@hurwitz.com (Jean Bozman) Security Tue, 28 Feb 2017 21:04:49 +0000
Can IBM Take Machine Learning Mainstream? http://hurwitz.com/blogs/judith-balancing-act/entry/how-ibm-is-unleashing-machine-learning

It is clear that Watson, IBM’s cognitive engine, is becoming embedded in just about every tool and application that IBM is selling these days. In its most recent announcement, IBM is now making the Watson Machine Learning (ML) engine available on the mainframe in order to bring machine learning to transactional databases.

This ML engine will be available both for on-premises use and as a private cloud implementation. IBM's announcement included a vast array of capabilities too numerous to cover in a single blog, so I would like to mention some of the aspects that I found especially important.

The following are my five top takeaways.

  1. While Watson is commonly known for supporting unstructured data in Natural Language Processing (NLP) solutions, the underlying engine provides sophisticated machine learning. This machine learning engine is applicable to advanced analytics on structured data as well as unstructured data.
  2. Mainframe transactional databases are the mainstay of many of IBM's large financial services, retail, and airline customers. Given the complexity, value, and scale of this data, it makes sense to provide advanced analytics based on machine learning to support the need to better understand this data. Being able to detect patterns and anomalies in this data provides a valuable tool for customers. Equally important is the ability to execute these advanced algorithms close to the data rather than moving the data to an external platform.
  3. One of the interesting capabilities of the machine learning engine is its ability to provide productivity assistance to developers. Experienced data scientists are expert at building models and selecting the right algorithms. However, there are simply not enough data scientists. The benefit of being able to provide cognitive assistance to help a less experienced developer take advantage of machine learning is extremely important. IBM's Machine Learning engine provides this capability. In essence, once a model is built, the system learns from the ingested data and recommends an algorithm that best matches the task. Once the algorithm is trained on the data, the system may suggest an alternative algorithm. Being able to provide the developer with help in selecting the most effective algorithm, or part of an algorithm, will make machine learning much more approachable. Through Cognitive Assist for Data Science (CADS), the system sends the test data to all 200 algorithms and calculates which algorithm, or combination of algorithms, provides the highest score and reliability. (A generic sketch of this score-and-select pattern appears after this list.)
  4. Flexibility and productivity are important aspects of IBM’s announcement. First, the engine enables developers to use the tools they are already familiar with and have made investments in. For example, developers can take advantage of the 55 SPSS algorithms. This is especially important for the large number of organizations that have used SPSS for years. In addition, the ML engine supports many of the languages widely used for machine learning including R, Java, Scala, and Python. Developers have a choice of execution engines including Hadoop and Spark.
  5. Improving on the user experience is another aspect of the ML engine. IBM has invested in creating a dashboard interface called the visual model builder that assists developers in building models – one of the most complex aspects of machine learning.
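
The score-and-select pattern behind Cognitive Assist for Data Science can be illustrated with generic open-source tooling: evaluate several candidate algorithms against the same data and surface the best performer. The sketch below uses scikit-learn and a bundled sample dataset purely as a stand-in – it is not IBM's CADS implementation.

```python
# Generic illustration (not IBM's CADS implementation): evaluate several
# candidate algorithms with cross-validation and report the best scorer.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset for illustration

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(),
}

scores = {}
for name, model in candidates.items():
    # Score each algorithm on the same data; higher mean accuracy wins.
    scores[name] = cross_val_score(model, X, y, cv=5).mean()

best = max(scores, key=scores.get)
print(scores)
print("recommended algorithm:", best)
```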

Conclusion

It is not a surprise that IBM would make its ML engine available first on the System z. Customers are reluctant to move their crown jewels of data off the mainframe onto other platforms – yet the requirement to bring advanced analytics to this core data is growing more urgent. IBM plans to bring this same engine to Power Systems in the near future. I liked the pragmatism of this approach to applying cognitive computing and machine learning to complex transactional data. Combining this with the ability to reuse SPSS, open-source tools, languages, and algorithms should make this offering attractive. In addition, the ability to combine analytics on structured and unstructured data will be an important element with real advantages for customers.


judith.hurwitz@hurwitz.com (Judith Hurwitz) IBM Mon, 27 Feb 2017 20:14:17 +0000
Oracle Makes a Business Case for Cloud Computing http://hurwitz.com/blogs/judith-balancing-act/entry/oracle-makes-a-business-case-for-cloud-computing

 

Oracle is stepping up its move to the cloud, positioning the Oracle Cloud public cloud as an engine for growing its overall enterprise business.

As more companies move workloads into hybrid clouds, and competition intensifies, Oracle has decided to play to its strengths in security, availability and workload performance. It has taken its own approach by leveraging its engineered systems, running in Oracle Cloud data centers – or in customer sites – as a differentiator in competing with other CSPs. 

Oracle knows it is competing with other cloud services that got into the market much earlier than it did: Amazon Web Services and Microsoft Azure. It also knows that Google Cloud Platform and IBM SoftLayer are working to grow share in the enterprise-focused hybrid cloud space. However, Oracle believes that large-scale migration of enterprise workloads is still in its early stages, giving it a large opportunity among customers planning to move enterprise workloads and business applications to public cloud providers.

In this competitive environment, Oracle is going directly to big customers, worldwide, with its Oracle CloudWorld events, recently held in New York and Seoul. It is positioning its deep software portfolio of Oracle databases, middleware and enterprise applications software as cloud service differentiators. In 2016, other CloudWorld events were held in China, India and Mexico.

What’s New

Oracle phased in its move to the cloud – working first on SaaS and PaaS before introducing a set of IaaS services in 2016. Its cloud revenue is growing, as it reported in its quarterly financials. Now, Oracle is still adding to its cloud services portfolio: as announced in January, Oracle introduced bare-metal-as-a-service, so that customers can run workloads on the Oracle Cloud service in place of on-premises hardware systems. It is also expanding Oracle Cloud capacity by adding three more data centers – in Virginia, London and Turkey. That will bring the total number of Oracle Cloud data centers worldwide to 25, covering all time zones and major geographies.

Oracle's focus on enterprise feature sets is positioned to pay off in hybrid cloud and public cloud, as it works to grow share in the rapidly expanding cloud services marketplace. Oracle's cloud strategy centers on maintaining a consistent computing environment, with the same Oracle stack running on-premises at customer sites and inside the Oracle Cloud. Oracle calls this an "integrated cloud" stack engineered to work on-prem or off-prem.

 

Oracle CloudWorld in New York

At New York’s CloudWorld event on Jan. 17, Oracle CEO Mark Hurd made the business case for CXOs adopting Oracle Cloud. Beyond the technology, he noted, business managers want to learn more about the business value of adopting cloud services. Cloud adoption supports IT simplification and workload consolidation, as enterprise datacenters are combined, and workloads migrate, either on-premises or off-premises. Oracle intends to play in both spaces, using the same Oracle software stack – while leveraging Oracle engineered systems in the Oracle Cloud.

Customers speaking at the New York CloudWorld event included New York City's CTO – and IT executives from MetLife, Thomson Reuters, ClubCorp and Grant Thornton, among others. They cited the flexibility they have with cloud services to test new applications, to deploy more instances quickly, and to pay for capacity as it is used.

 

Making the Business Case for the Oracle Cloud

The depth of Oracle’s commitment to cloud computing can be seen in Oracle’s investment levels in cloud technology, its re-write of Oracle Fusion applications for cloud-based workloads and its deep applications portfolio. All of that says that Oracle will be a long-run provider in the hybrid cloud market – and that it plans to replicate its earlier successes in enterprise database and enterprise applications with enterprise plays in the cloud computing marketplace.

The leading arguments in Oracle’s enterprise case for the cloud include:

  • Data-centric focus on cloud computing, starting with hybrid clouds linking on-prem and off-prem systems. Data migration can, and often does, end up with entire applications running on the Oracle Cloud. Oracle's offerings include many analytics and data-management capabilities.
  • Leveraging Oracle engineered systems for on-prem or off-prem (with the Oracle Cloud public cloud) use. This allows Oracle to manage the systems in the same way, regardless of location within a hybrid cloud.
  • Leveraging Oracle enterprise applications. Oracle has a portfolio of hundreds of cloud-based and on-premises enterprise applications, such as ERP, HCM, SCM and others. Oracle acquired dozens of companies over the last 20 years – including PeopleSoft, J.D. Edwards and others, growing its inventory of Oracle Applications. It then spent nearly 10 years re-engineering them into the Oracle Fusion Applications, for both on-premises and SaaS-based deployment, based on Java-enabled code, to link apps together and support unified management of applications.

 

Business-Centric Services

In terms of Oracle’s software products and services, there is a multi-faceted strategy to position Oracle as a unifying element of hybrid cloud deployments. Oracle’s positioning includes the following:

  • Positioning a broad inventory of packaged business applications, including Oracle’s on-premises E-Business Suite, as well as applications acquired by Oracle from enterprise software companies (e.g. PeopleSoft, J.D. Edwards) – and now adapted for hybrid cloud.
  • Appealing to longtime Oracle Database customers who plan to move at least some of their mission-critical workloads and data to hybrid cloud.
  • Outreach to new customers, including cloud-native developers and enterprise developers, and working to gain competitive wins in IT organizations that have multi-vendor data centers that run Oracle.

 

Summary

Cloud adoption is accelerating in many organizations, across the board. For Oracle, it will be important to extend its reach to new audiences beyond the Oracle installed base, emphasizing its enterprise-centric delivery of business databases and business applications with the Oracle Cloud. We expect Oracle to continue its outreach to new cloud-services customers, through Oracle CloudWorld events, Oracle OpenWorld, and through direct and partner sales efforts, throughout 2017.

 

 

 


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Tue, 07 Feb 2017 16:03:07 +0000