Latest blog entries – http://hurwitz.com/blogs/judith-balancing-act/latest – Mon, 21 Aug 2017 23:45:17 +0000

Are Technology Shakeups in Store for 2015?
http://hurwitz.com/blogs/judith-balancing-act/entry/are-technology-shakeups-in-store-for-2015

Before I start with my predictions, let me explain what I mean by a prediction. I believe that predictions should not be about the end of a technology cycle but about the timing for when an issue begins to gain traction that will result in industry shifts. As I pointed out in my book, Smart or Lucky? How Technology Leaders Turn Change Into Success (Jossey-Bass, 2011), important industry initiatives and changes usually require decades of trial and error before they result in significant products and important trends. So, in my predictions, I am pointing out changes that are starting.

I know that the rule is that you need to come up with ten predictions when a new year is about to start. But I decided to break the rule and stick with seven. Call me a renegade. I think that we have a very interesting year taking shape. It will be a year when emerging technologies move out of strategy and planning into execution. So, I expect that 2015 will not be business as usual. There will be political shakeups in both IT and business leadership as technology takes on an increasingly strategic role. Companies need to know that the technology initiatives that are driving revenue are secure, scalable, predictable, and manageable. While there will always be new emerging technologies that take us all by surprise, here is what I expect to drive technology execution and buying plans in the coming year.

1. Hybrid cloud management will become the leading issue for businesses as they rely on hybrid cloud.

It is clear that companies are using a variety of deployment models for computing. Companies are using SaaS, which obviously is a public cloud-based service. They are using public cloud services to build and sometimes deploy new applications and for additional compute and storage capacity. However, they are also implementing private cloud services based on their requirements for security and governance. When cloud services become commercial offerings for partners and customers, the economics favor a private cloud. While having a combination of public and private is pragmatic, this environment will only work with a strong hybrid cloud management service that balances workloads across these deployment models and manages how and when various services are used.
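To make the workload-balancing point concrete, here is a minimal sketch of the kind of placement decision a hybrid cloud management layer makes; the workload attributes and policy thresholds are hypothetical, not drawn from any particular product.

```python
# Hypothetical placement policy: attribute names and thresholds are illustrative only.
def choose_deployment(workload: dict) -> str:
    # Regulated or highly sensitive data stays on the private cloud for governance reasons
    if workload["data_sensitivity"] == "high":
        return "private"
    # Short-lived, bursty demand is usually cheapest on pay-as-you-go public capacity
    if workload["burst"] and workload["duration_hours"] < 24:
        return "public"
    # Steady, commercial-scale services tend to favor private cloud economics
    return "private"

print(choose_deployment({"data_sensitivity": "low", "duration_hours": 4, "burst": True}))     # public
print(choose_deployment({"data_sensitivity": "high", "duration_hours": 720, "burst": False})) # private
```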

2. Internet of Things (IoT) will be dominated by security, performance, and analytics. New players will emerge in droves.

The Internet of Things will come on strong, since it is now possible to store and analyze data coming from sensors on everything from cars to manufacturing systems and health monitoring devices. Managing security, governance, and overall performance of these environments will determine the success or failure of this market. Businesses will have to protect themselves against catastrophic failure – especially when IoT is used to manage real-time processes such as traffic management, sensors used in monitoring healthcare, and power management. There will be hundreds of startups. The most successful ones will focus on security, management, and data integration within IoT environments.

3. Digital marketing disillusionment sets in – it is not a substitute for good customer management.

Many marketing departments are heavily investing in digital marketing tools. Now corporate management wants to understand the return on investment. The results are mixed. First, companies are discovering that if they do not improve their customer care processes along with digital marketing software and processes, digital marketing is useless. In fact, it may actually make customer satisfaction worse since customers will be contacted through digital marketing services but will not get better results. This will result in a backlash. Unfortunately, it may be the messenger who is blamed rather than the culprit – poor customer care.

4. Cognitive computing will gain steam as the best way to capitalize on knowledge for competitive advantage.

The next frontier in competitive differentiation is how knowledge is managed. The new generation of cognitive solutions will help companies gain control of their unstructured data in order to create solutions that learn. Expect to see hundreds of startups emerge that combine unstructured data management with machine learning and statistical methods, advanced analytics, data visualization, and Natural Language Processing.

5. IT will gain control of brokering and managing cloud services to ensure security and governance.

For the past five years or more, business units have been buying their own public cloud compute and storage services, bypassing the IT organization. Many of these organizations were frustrated with the inability of IT to move fast enough to meet their demands for service. When these departments were experimenting with cloud services, expenses could easily be hidden in discretionary accounts. However, as these public cloud services move from pilot and experimentation to business applications and services, there are implications for cost, governance, and management. As often happens when emerging technology becomes mainstream, IT is being asked to become the broker for hybrid cloud services.

6. Containerization and well-designed APIs are becoming the de facto method for creating cross-platform services in hybrid computing environments.

One of the benefits of a services architecture is that it is possible to truly begin to link computing elements together without regard to platform or operating system. The maturation of container technology and well-designed APIs will be a major game changer in 2015. Containers and APIs are linked because both focus on abstracting services and complexity. These abstractions are an important step toward moving from building applications to linking services together.
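As a minimal illustration of that abstraction – assuming the Flask framework and an invented /api/v1/orders endpoint, not any specific product – the sketch below shows a small service exposed through a well-defined API; packaged in a container, the same code runs unchanged on a private or public cloud.

```python
# Minimal sketch of a service behind a small, well-defined API (endpoint and data are invented).
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/orders/<order_id>", methods=["GET"])
def get_order(order_id):
    # A real service would query a backing store; a stub record keeps the sketch self-contained
    return jsonify({"id": order_id, "status": "shipped"})

if __name__ == "__main__":
    # Binding to 0.0.0.0 lets the service be reached from outside its container
    app.run(host="0.0.0.0", port=8080)
```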

7. Data connectivity combined with business process is emerging as the biggest headache – and opportunity – in hybrid environments.

Data connectivity and business process issues are not a new problem for businesses. However, there is a subtle change with major ramifications. Because business units tend to control their own data, both on premises and in SaaS applications, it is increasingly difficult for business leadership to create a unified view of data across a variety of business units. The inability to bring data and process together across silos puts businesses at risk. This complexity will emerge as a major challenge for IT organizations in 2015.


judith.hurwitz@hurwitz.com (Judith Hurwitz) Vendor Strategy Tue, 16 Dec 2014 17:24:48 +0000
IBM Leverages Pervasive Encryption to Grow the IBM Z Mainframe Customer Base in a Hybrid Cloud World
http://hurwitz.com/blogs/judith-balancing-act/entry/ibm-leverages-pervasive-encryption-and-data-protection-to-grow-the-ibm-z-mainframe-customer-base-in-a-hybrid-cloud-world

IBM is introducing a major strategy to transform its foundation enterprise platform, the mainframe – now called IBM Z – into a new cornerstone for hybrid cloud IT.  

The new IBM z14 machine brings with it hardware and software enhancements intended to generate more mainframe installations in hybrid clouds by making it easier to link enterprise data centers with multi-cloud off-premises deployments.

Because hybrid cloud is rapidly emerging as the deployment goal for large and mid-size organizations, this new positioning and marketing is a mechanism to enhance the IBM mainframe’s role in next-generation hybrid clouds.

The key tenets for this IBM strategy involve deep security, data protection, extensive analytics – and wide support for blockchain open ledger solutions. New features, such as pervasive encryption, secure containers, and enhanced open-source development, will be packaged with new software pricing and multi-cloud support.

The IBM z14 system positions IBM Z as an important platform for data migration and for the protection of transactional data, along with real-time analysis and machine learning (ML) that leverage that data.

Key to this strategy is that secure transit for important data, protected by secure containers, will be packaged along with pervasive encryption on the hardware system itself. IBM will retain the positioning of z platforms as scalable, resilient, and highly secure, but will emphasize its role as a platform with open connections to multi-cloud, distributed networks.

This could be a turning point for the public perception of IBM mainframes: After years of positioning “centralized” vs. “distributed” computing, IBM now sees that a “both” positioning could help it grow IBM mainframe presence in fast-growing hybrid cloud deployments spanning scale-up and scale-out infrastructure. This approach has the potential to expand the zSystems installed base more rapidly, and has company-wide implications for IBM’s profitable, recurring mainframe revenue.

 

Going Forward

On July 17, 2017, IBM announced that the new IBM zSystems platform, the z14, includes the following:

  • Pervasive Encryption: IBM has said it will encrypt all data, leveraging on-chip encryption/decryption built into the IBM Z’s 10-core processors. This addresses high levels of customer concern about cyberattack vulnerability. Pervasive encryption will not require changes to applications, databases, and other workloads running on the IBM Z, and it will not impose a performance overhead (a conceptual sketch of application-transparent encryption appears after this list). Given that security is often listed as the top priority for IT organizations, the pervasive encryption features should attract wider consideration outside the traditional IBM mainframe customer base. This positions IBM Z as an important hub for data transiting the system, or for data being sent to the system via data migration and containers.
  • Extended Support for Open and Connected Interfaces: Expanded support for open application programming interfaces (APIs) will extend IBM mainframe capabilities for security and analytics to a wider pool of developers and DevOps personnel. Customers have seen IBM build on open interfaces before – especially in the Linux environments running on IBM mainframe platforms. Here, IBM is confirming the open development direction for customers and ISV developers, in recognition of open-systems momentum for AppDev and DevOps, and as a pragmatic assessment of next-generation development for its mainframe platform.
  • Support of Deep Analytics and Machine Learning: Addressing large data pools, or data lakes, through the use of multiple IBM Z hubs is designed to reduce large data transfers to centralized data centers for deep analytics processing. Machine learning will automate the process, speeding time-to-results for business organizations looking to Big Data coming from structured and unstructured data resources. Many customers are in the early stages of designing and implementing a software-defined infrastructure (SDI) strategy, for which data migration and data locality will play important roles in determining efficient placement of data for rapid processing. This gives IBM time to work with customers and system integrators to implement SDI environments, which tend to be built out in phases, over time.
  • Added Support for Open Development: A new focus on Secure Containers, Open Languages, and microservices aims at attracting more open-source development for IBM mainframes. New features, such as enhanced visualization of code, more software tools and plug-ins, build on previous features for ease-of-use, open-source programming and multi-cloud support. IBM can be expected to market these through hands-on workshops in key cities, such as San Francisco and New York, which are highly visible centers for customer cloud development projects.
  • New Software Pricing for Hybrid Cloud: IBM is providing new pricing terms for zSystems software, responding to longtime customer concerns about pricing and maintenance fees that have impacted buying and leasing decisions. This pricing is responsive to cloud-computing’s pay-as-you-go pricing models. It will take time to see how the customer base responds to these new pricing models, and how they will relate to patterns of IBM mainframe software-related sales, which impact IBM and longtime zSystems ISV partners, like Computer Associates, Compuware and BMC. By introducing pricing for secure containers, IBM is moving to a new, and evolving, model for its mainframe software pricing that is absolutely essential to growing the mainframe customer base as the developer community sees demographic shifts to Millennials and young professionals.
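The pervasive encryption item above is implemented in IBM Z hardware and firmware; the sketch below is only a conceptual analogy, written with the third-party Python cryptography package, of what "transparent to the application" means – reads and writes pass through a wrapper that encrypts and decrypts, so the calling code never changes.

```python
# Conceptual analogy only -- not IBM's implementation. Uses the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in a real system, keys live in a protected key store
cipher = Fernet(key)

def write_record(store: dict, record_id: str, payload: bytes) -> None:
    store[record_id] = cipher.encrypt(payload)     # data at rest is always ciphertext

def read_record(store: dict, record_id: str) -> bytes:
    return cipher.decrypt(store[record_id])        # the application still sees plaintext

store = {}
write_record(store, "acct-42", b"balance=1000")
assert read_record(store, "acct-42") == b"balance=1000"
```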

 

Timing of the Announcement

The July news can be expected to promote sales and leasing in the fourth quarter of calendar 2017 – traditionally the strongest quarter in IBM’s fiscal year, which ends on Dec. 31, 2017. General availability for IBM z14 platforms will begin in September. From a revenue-generating perspective, the IBM mainframe has long been an important foundation for IBM’s systems business, especially in view of reduced volume shipments for IBM Power Systems in recent years – and the acquisition of IBM’s System x x86 server business by Lenovo in 2014.

Budgetary concerns at most companies have led to close scrutiny for high levels of IT spend. This is why the software pricing changes are so vital to the IBM z14 product rollout. If business organizations agree that pervasive encryption will help them to protect against cyber-attacks, while supporting hybrid cloud expansion, then the z14 launch could lead to more IBM systems growth in 2017 and 2018.

 

Planning to Expand the IBM Mainframe Customer Base

IBM zSystems have seen modest growth in recent years, building on a worldwide installed base that is adding net-new units in more geographies and more cloud-computing scenarios. Growth in EMEA (e.g., the Middle East, Africa and Eastern Europe), regional China, and multi-cloud access have helped, as has the introduction of all-Linux LinuxONE systems in recent years.

Now, with the z14 platform, IBM plans to accelerate that growth by placing more mainframes, more quickly, into new and emerging hybrid cloud scenarios. Secure and resilient support for important IBM workloads, including IBM DB2 databases, transactional applications (e.g. CICS) and IBM-compatible ERP applications could drive more business to IBM cloud centers and IBM z14 systems. However, this won't happen automatically. IBM must be clear in communicating the technical and business-model changes it has made with the IBM Z platform.

This strategy plays to IBM’s enterprise data-center strengths, even as it looks to expand IBM z14 placements for customers’ cloud workloads requiring more security and availability than has been widely available on other cloud platforms. It is designed to appeal to extended enterprises that are building out hybrid cloud infrastructure. This is where the DevOps software tools and open APIs will play an important role in attracting more types of workloads to the IBM Z platforms.

Equally important for IBM will be marketing the pervasive encryption solution to two specific groups – cloud service providers and telcos –  both of which could leverage secure data technology – and deliver secure data services to end-customers without requiring those end-customers to have zSystems expertise. This suggests that IBM could add a focused go-to-market (GTM) campaign with CSPs, hosters, telcos and software partners aimed at facilitating IBM Z use in hosting and cloud data centers. IBM has not formally announced that GTM program, but it would be a logical extension of the drive to host more IBM mainframes within the IBM Cloud public cloud – and customers' hybrid clouds.


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Mon, 17 Jul 2017 20:51:48 +0000
OpenStack Takes a SnapShot of Its Customer Usage Patterns
http://hurwitz.com/blogs/judith-balancing-act/entry/openstack-takes-a-snapshot-of-its-customer-usage-patterns

 

Leaders of the OpenStack Foundation presented a profile of OpenStack adoption at the semi-annual Summit in Boston (May 8-10), based on a survey of 1,300 OpenStack developers and users in 78 countries. The Foundation’s key objective: Making clear that production deployments are growing, building on projects begun since 2015, even as some users are testing proof-of-concept (PoC) projects.

This blog will show key data points presented in the survey, looking at OpenStack’s challenges and the path forward for customer adoption and consumption models.

The OpenStack Foundation’s survey found that OpenStack software is having an impact in datacenters worldwide, showing 44% growth in production deployments year-over-year. In all, more than 5 million cores are in production worldwide, based on OpenStack data. More details can be found on the OpenStack website, which posts analytics of the survey data.  

Many of the earliest OpenStack users are large companies like AT&T, eBay, General Electric, the U.S. Army, and Verizon. Organizations like these have large and sophisticated IT staffs that have been working with OpenStack software for most of the seven years it’s been generally available to build cloud stacks.

However, the OpenStack survey showed growth in deployments among SMB organizations. More than 30% of OpenStack users work in companies with 10,000 or more employees, according to the survey – and 25% work in companies with fewer than 100 employees. The rest – 45% of respondents – include midsize companies (100-1,000 employees) and large ones (1,000-10,000 employees).

 

Paths to OpenStack Adoption

Customers have climbed a learning curve with OpenStack – and many are gaining operational benefits from applying OpenStack to hybrid cloud strategies, running on-prem and off-prem workloads. The next step is to extend into multi-cloud deployments, making interoperability and ease-of-use important criteria.

These customers are building cloud infrastructure based on the top OpenStack services, including the Keystone identity service, Nova compute service, Neutron networking service, Horizon dashboard service and Cinder block-storage service.
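As a small example of how those services are consumed programmatically – assuming the openstacksdk library and a cloud profile named "mycloud" in clouds.yaml, which is an invented name – the sketch below authenticates through Keystone and lists Nova servers and Cinder volumes.

```python
# Minimal sketch using openstacksdk; the "mycloud" profile is an assumption, not a real deployment.
import openstack

conn = openstack.connect(cloud="mycloud")   # Keystone authentication via clouds.yaml credentials

# Nova compute service: list the servers visible to this project
for server in conn.compute.servers():
    print("server:", server.name, server.status)

# Cinder block-storage service: list volumes and their sizes (GB)
for volume in conn.block_storage.volumes():
    print("volume:", volume.name, volume.size)
```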

However, many organizations are just starting with OpenStack, or electing to go with single-vendor cloud stacks supporting OpenStack technology. These deployment patterns are often based on long-term relationships with vendors that build branded cloud infrastructure software – and those stacks may include OpenStack building-blocks.

 

Key Challenges

Key challenges for OpenStack adoption include:

  • Reducing the complexity and number of emerging OpenStack projects, in terms of tracking new projects and reducing minor differences between similar OpenStack community projects.
  • Increasing ease of use – making it easier for first-time users to adopt the technology, and to integrate it into existing infrastructure. This is aimed at faster adoption and operational efficiency in customer sites.
  • Ensuring interoperability for application development tools. Getting DevOps buy-in through OpenStack support for widely used APIs and software tools is important for wider adoption.
  • Supporting deployments for OpenStack services that span multiple clouds – whether public, private or hybrid. This is why the OpenStack Foundation is working more closely with large cloud service providers (Amazon Web Services, Microsoft Azure and Google Cloud Platform).

 

Many Consumption Models

To speed OpenStack adoption, and to address the technology challenges of working with multiple OpenStack building-blocks (e.g., Nova, Cinder), OpenStack adoption takes many forms. Mark Collier, COO of the OpenStack Foundation, said in his keynote that OpenStack is composable, open infrastructure, allowing customers to “consume different pieces of OpenStack and [to] combine them in useful ways with other open source technologies.”

Here are some of the ways OpenStack will be consumed by customers, depending on their experience levels, the time/cost of implementing OpenStack in data centers – and their budgets:

 

Vendor Solutions

It’s already true that many organizations are electing to go with mixed-vendor or single-vendor cloud stacks that leverage OpenStack services. These adoption patterns are often based on long-term relationships with vendors that build branded cloud infrastructure – and they include OpenStack building-blocks.

For example, Red Hat, SUSE and Ubuntu were all present at the Boston conference, offering to work with customers to build cloud solutions based on OpenStack technologies – and to provide service/support for OpenStack deployments.

By engaging with named vendors, much of the work of building cloud services on OpenStack software layers is being done by others – requiring fewer IT personnel to build applications or microservices.

Remote management for private clouds is another type of solution: the OpenStack Foundation pointed to remotely managed private clouds, an emerging offering delivered via cloud service partners.

 

OpenStack + Containers

The move in DevOps to containers has brought OpenStack support for Kubernetes orchestration software and the Docker container software. Mix-and-match deployments reflect commercial realities. For example, Red Hat sells its OpenShift Container software product, which works with Docker, Kubernetes, and other types of open-source software. Other ISVs, including SUSE and Ubuntu, and a variety of software app-dev tools providers, support combined software solutions for Kubernetes orchestration and Docker containers.
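For readers who want a feel for the orchestration side, here is a minimal sketch – assuming the official Kubernetes Python client and a local kubeconfig, not any particular OpenStack or OpenShift deployment – that lists the pods a cluster is running.

```python
# Minimal sketch with the official kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()          # assumes credentials in ~/.kube/config
v1 = client.CoreV1Api()

# Enumerate every pod the cluster is running, across all namespaces
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```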

 

Custom Solutions

Custom solutions based on OpenStack technology are still a consumption model, especially among early adopters and large-scale IT organizations. Companies like AT&T, T-Mobile and Verizon – all telecom suppliers – have been leveraging OpenStack to build cloud services from the data center to the “edge” of the network. These projects coincide with the telcos’ move to 5G wireless technology. Other companies speaking at the conference, like GE Digital HealthCare, described data center footprint savings and greater efficiency associated with their adoption of cloud computing and OpenStack-based services replacing legacy applications.

 

Survey Results

In the 2017 survey, typical customers run up to nine OpenStack services in their infrastructure, with 16% running 12 or more services. The scope of deployment is truly global: More than 85% of all clouds use some OpenStack services, and some customers have OpenStack running in up to 80% of their cloud infrastructure.

The open-source Kubernetes orchestration software is far and away the leading “application” running in OpenStack deployments (See Chart I, below).

As one customer example shows, the driver for cloud migration is efficiency and contained costs. Patrick Weeks, senior director of digital operations at GE Digital Healthcare, said OpenStack is key to moving more workloads to the cloud. Since it began working with OpenStack technologies, the GE unit has migrated 530 applications to the cloud and retired 608 applications; in all, 42% of the company’s applications are running in the cloud. Using OpenStack, the GE unit achieved annual savings of $30 million, and reduced its on-prem “footprint” for systems by nearly 50%.

 

Chart I: Top Applications for OpenStack*

  • 45% run Kubernetes
  • 18% run OpenShift
  • 18% run Cloud Foundry
  • 17% run Custom/Build your own
  • 14% run Mesos
  • 14% run Docker Swarm
  • 17% run other infrastructure applications

(*Open Answer Question, multiple responses per respondent)

Source: OpenStack Survey Data, 2017, n=1,300

 

Where Does OpenStack Go From Here?

Clearly, OpenStack will remain a force in cloud computing for many years to come. Support from large vendors – and wide geographic coverage in the Americas, EMEA and Asia/Pacific – suggests it will retain an important role in hyperscale and enterprise cloud computing.

Its technology will become an integral part of many cloud solutions – including packaged and custom code – that will make their way into the rapid expansion of cloud services worldwide. In many cases, the technology will be deployed with software from vendors, ISVs, cloud service providers (CSPs) and channel partners.

Yet, OpenStack’s importance in the marketplace is still being determined. As in earlier eras, the world will look to consistent benefits – and continuing growth in deployments – to make the case for OpenStack in public, private and hybrid clouds.

 


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Fri, 26 May 2017 13:53:31 +0000
Red Hat Summit Focuses on the Business Impact of Customers' Hybrid Cloud Migrations
http://hurwitz.com/blogs/judith-balancing-act/entry/red-hat-summit-2017-showing-the-business-impact-of-hybrid-cloud-migration

 

 

Mindful that the move to hybrid cloud is accelerating – and that business priorities are increasingly driving IT buying decisions – Red Hat executives at Red Hat Summit 2017 highlighted the business impact of adopting their portfolio of open-source software products.

The focus at this year’s Summit: Providing developers and IT operators with tools that reduce the number of individual steps needed to complete many repetitive tasks. Integration of functionality, and unified consoles for monitoring and management are intended to reduce operational costs for the business. This approach addresses the “Dev” and the “Ops” personnel within an IT organization – both of which impact overall operational efficiency.

Security, availability, and consistency – the need to address all of these brings home the point that hybrid clouds must extend the reliability of enterprise IT into the cloud-computing world. Bringing those attributes to cloud development and deployment tools is a high priority for Red Hat, which is already widely used in enterprise and cloud environments.

 

Patterns of Adoption Are Changing

Hybrid cloud adoption, linking on-premises enterprise IT with off-premises public clouds, is becoming widely accepted among IT and business managers. This push into the cloud is being driven by digital transformation, as businesses change their business models and compete more effectively. Businesses want to gain IT flexibility and business agility as their industries (e.g., retail, financial services, healthcare) cope with dramatic change.

Business managers are highly influential in making new technology decisions, and their buy-in is required for technology adoption. In large businesses, their approval is absolutely essential to hybrid-cloud planning and deployments. In small/medium businesses (SMBs), the decision to push more workloads to public cloud providers is often driven by cost and operational priorities.

Without support from business managers, IT organizations will find it increasingly difficult to secure funding for next-generation systems and software. Enterprise customers describing their Red Hat deployments at the Summit conference included the Disney/Pixar animation studio, the Amadeus airline reservation system, and Amsterdam’s Schiphol Airport.

 

Products Aimed at Dev and Ops

At the Summit, many of Red Hat’s announcements focused on simplification and ease of use for two main roles within IT organizations: software developers and IT operations personnel. Integration of functionality, and unified consoles for monitoring and management are intended to reduce operational costs for the business.

With this approach, developers and operations personnel can each focus on their primary tasks, working more effectively, as IT silos are removed from the infrastructure, and workloads move to available systems and storage resources. One key example: The encapsulation of applications in Linux containers has the practical effect of separating application development from infrastructure deployment, via abstraction. By leveraging containers, IT organizations can move applications through the dev/test/production pipeline more quickly.

Here are three product categories addressed by Red Hat announcements at the Summit: 

  • Containers. The Red Hat OpenShift Container Platform 3.5 allows OpenShift containers, based on Red Hat Enterprise Linux (RHEL), to work with open-source Kubernetes orchestration software and Docker. The Linux containers provide a runtime for application code inside containers that also leverages technology from other widely adopted open-source projects.
  • OpenShift.io. The platform combines the features of several widely adopted development tools, helps teams manage work items through the development process, and prompts developers with code options as they build new cloud-ready applications. OpenShift.io uses a browser-based IDE built on Eclipse Che.
  • Ansible. Red Hat is extending Ansible, its agentless automation tool, to provide automation services across the full portfolio of Red Hat solutions. Red Hat acquired Ansible and its technology in 2015. Ansible is now integrated with the Red Hat CloudForms cloud management platform, Red Hat Insights proactive analytics, and the Red Hat OpenShift Container Platform.

The greatest opportunities for growth, according to Red Hat executives, are in app/dev collaboration tools, middleware and cloud management software. The $2.5 billion company plans to accelerate its top-line revenue growth by leveraging its partnerships with hardware systems, software, services and cloud service providers (CSPs).

 

Partnering with Cloud Service Providers (CSPs)

As it grows its ecosystem, Red Hat is deepening its partnerships with cloud service providers (CSPs) as customers. Certainly, many enterprise applications will remain on-premises -- inside the firewalls of data centers -- due to security and data governance concerns. However, the adoption of cloud computing is increasing, with more enterprise workloads migrating to CSPs, including Amazon Web Services, Microsoft Azure and Google Cloud Platform (GCP).

Red Hat announced a strategic alliance with Amazon Web Services (AWS). Red Hat wants to tap the deep reservoir of AWS developers as it grows its sales of OpenShift tools, JBoss middleware, and RHEV virtualization. Through this alliance, Red Hat will natively integrate access to AWS services into the Red Hat OpenShift Container platform. This gives hybrid cloud developers a new way to gain direct access to AWS cloud services, including Amazon Aurora, Amazon EMR, Amazon Redshift, Amazon CloudFront, and Elastic Load Balancing.

Other CSP relationships are important, because many customers are moving to multi-cloud strategies. Red Hat is working with Google Cloud Platform (GCP) on open-source projects, including the ongoing development of Kubernetes orchestration software. Linux containers support multiple programming languages, and provide a runtime environment for applications built on microservices. This allows them to scale up by scaling “out” in a style originally developed by CSPs for hyperscale applications.

Microsoft Azure and Red Hat delivered at least two joint Summit presentations in Boston, showing their increasing co-presence in the cloud computing world. It is tangible evidence of the way that the cloud has evolved, with Linux and Windows workloads running side-by-side, both on-premises and off-premises, on the Microsoft Azure cloud.

 

Business Objectives and Enterprise Clouds

Red Hat sees its future opportunity in addressing containers and micro-services for end-to-end application development for hybrid cloud. Another focus for the company is improving cloud management for customers that are migrating more business logic into the cloud. Red Hat plans to build on the work it has done with early adopters of hybrid cloud -- and to make it easier for new customers and prospects to consider migrating more business workloads to hybrid clouds.

Specifically, three key ingredients for expanding Red Hat's total available market are: developing container technology, building on DevOps toolsets and automating cloud management. That is why the business marketing messages are so important for the opportunity that Red Hat is embracing as it plans to grow its top-line revenue in 2017.


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Tue, 16 May 2017 22:32:21 +0000
A tribute to Marcia Kaufman: A Woman of Valor
http://hurwitz.com/blogs/judith-balancing-act/entry/a-tribute-to-marcia-kaufman-a-woman-of-valor

Marcia and I would always tell people that we met in high school when we were both 15 years old. But that doesn’t tell the story of a friendship and a business partnership that began in 2003. I was an entrepreneur at a crossroads of my career. I had lost the company I had started in 1992 and walked away from a company that I started in 2002. I wasn’t sure what I wanted to do next and I will admit that I was afraid that I would fail. Then one day I got a phone call from Marcia. We had been in touch off and on over the years. Marcia too was at a crossroads. She had recently left a job as an industry analyst and was also trying to decide what to do next.

She said to me, “If you are thinking about starting another company I am interested.” To be truthful, I wasn’t sure that I was ready for what I knew would be difficult. But I agreed that we should meet for coffee. Despite my misgivings, and perhaps because of Marcia’s infectious optimism, I decided that it made sense to give it a try.

And I was right. It was hard. In the beginning we struggled to find projects and position our very tiny company. We taught each other a lot about working as a team, about technology, and about having fun while working hard. I could tell you hundreds of stories about our adventure over 13 years. There was the time that we worked all Christmas day so that we could finish a research paper. I could tell you about the one time that Marcia yelled at the top of her lungs at a freelance researcher who was working with us on a project. It was unusual because Marcia could get along with everyone – except this one very annoying writer. I could tell you about all the times that we would meet at the airport to go to conferences in Las Vegas. While I dreaded going to Vegas, it energized her. She made these trips fun. She loved the concerts that took place during the shows. She loved to dance and sing. I often left early to get some sleep. But Marcia never seemed to tire and stayed long after I went to bed.

I would often push us to take on new projects, such as the many books we wrote together. She would look at me as though I was crazy (which I probably was) but she would never say no. Even when she was sick, we worked together on the hardest writing project we ever undertook – cognitive computing and big data analytics. It was a wonderful book and a testament to Marcia’s brilliance and perseverance.

Over the last three years, it became harder and harder for Marcia to work. This made her angry, because she loved researching, learning, and writing. Over the years, she became a master writer. She was widely respected and deeply loved. She would tell me in moments when the two of us were together how very sick she was. She was quite aware of her condition, but she continued to work. When she couldn’t come into the office, she would work at home. Her doctor was shocked that she was still working. In fact, I remember Marcia telling me that her doctor expected her to stop working and just take care of herself. She continued to work until the disease finally made it impossible. With every setback she would first say to me, “Oh, Judith, I just got such bad news.” And in the next sentence she would tell me, “…but I am going to beat it.” In fact, the last time I visited Marcia when she was in rehab two weeks before she died, she told me that she had come to accept what was happening to her. But in characteristic Marcia fashion, her next words were, “But I am still going to fight.”

This is at least the 6th draft of this note that I have written, trying to capture the Marcia I knew. I loved Marcia as a friend and colleague. I miss her strength, her honesty, her intensity, her kindness, her elegance, and her love of life. I don’t think that I will ever meet another person like Marcia. She held onto life with such fervor.

As brokenhearted as I am, I know that Marcia lived life as fully as anyone I have ever known. I will miss you forever and you will always be in my heart.


judith.hurwitz@hurwitz.com (Judith Hurwitz) Uncategorized Sun, 26 Mar 2017 14:31:47 +0000
Redfish Emerges as an Interoperability Standard for SDI
http://hurwitz.com/blogs/judith-balancing-act/entry/redfish-emerges-as-an-interoperability-standard-for-sdi

The world’s data centers are working to adopt Software Defined Infrastructure (SDI) – but they are far from reaching their goals. The single biggest challenge in SDI is achieving interoperability between many kinds of hardware. Without that, a data center’s systems become a Tower of Babel, preventing IT system admins from seeing a unified view of all resources – and managing them.

Built to leverage virtualized infrastructure, SDI will be easier to achieve if there are more bridges between platforms – leading to better management. This blog focuses on an emerging management standard called Redfish, which is designed to help make SDI a day-to-day reality for hybrid cloud.

 

Seeking More Unified Management for Software-Defined Infrastructure (SDI) 

Redfish addresses an everyday reality: Most large organizations have “inherited infrastructure” based on years of successive IT decisions – and waves of systems deployments. Multi-vendor and mixed-vendor environments are the norm in enterprise data centers – but most customers would prefer to see more unified views of all devices under management. While many have installed software-defined storage and servers – most have not yet adopted software-defined networks.

That’s why we see Redfish APIs as a practical step toward SDI – especially for enterprise customers with large heterogeneous, multi-vendor installations.

Redfish offers a standardized way to address scalable hardware from a wide variety of vendors. Just as important is its growing ecosystem, as it is adopted by a large and growing group of vendors. To keep this multi-vendor technology effort moving along, the Redfish APIs are being managed by the Distributed Management Task Force (DMTF) through its Scalable Platforms Management Forum (SPMF).

 

How the Technology Works           

Here’s how the technology works: Built on RESTful APIs, and leveraging JSON, Redfish is a secure, multi-node-capable replacement for IPMI-over-LAN links. It manages servers, storage, network interfaces, switching, software, and firmware update services – presenting a wide range of data center devices that can be managed via the Redfish interfaces.
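To show how little is involved in talking to a Redfish service, here is a minimal sketch – the BMC hostname and credentials are placeholders, not a real endpoint – that walks the standard /redfish/v1/Systems collection and prints basic inventory and health data.

```python
# Minimal sketch of a Redfish client; bmc.example.com and the credentials are placeholders.
import requests

BASE = "https://bmc.example.com"
AUTH = ("admin", "password")

# /redfish/v1/Systems is the standard collection of computer-system resources
resp = requests.get(f"{BASE}/redfish/v1/Systems", auth=AUTH, verify=False)  # many BMCs use self-signed certs
resp.raise_for_status()

for member in resp.json().get("Members", []):
    # Each member is a JSON reference; follow it to read the system resource itself
    system = requests.get(BASE + member["@odata.id"], auth=AUTH, verify=False).json()
    print(system.get("Model"), system.get("PowerState"), system.get("Status", {}).get("Health"))
```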

It’s important to note that standards efforts often fail if there is not enough buy-in by the vendors working to implement those standards. However, we’re finding that Redfish is drawing support from a broad array of hardware and software vendors.

 

What's New

A flurry of Redfish announcements came in August 2016. Following that, there was indeed a long silent period. But in January 2017, a Host Interface Specification was added to the existing TCP/IP-based out-of-band Redfish standard. The new specification allows applications and tools running on an operating system to communicate with the Redfish management service.

If we were to take a snapshot of the development process, we would see that it has matured since 2015. Now, a parallel project called Swordfish, being developed by SNIA (Storage Networking Industry Association) members, is focused on storage management. Swordfish is designed to make it easier to integrate scalable storage solutions into hyperscale and cloud data centers. Because Swordfish is an extension of Redfish, it uses the same RESTful interfaces and JavaScript Object Notation (JSON) to seamlessly manage storage equipment and storage services, in addition to servers.

 

A Pragmatic Solution for Hybrid Clouds

In our view, the DMTF’s decision to support RESTful APIs is a pragmatic approach for customers, who won’t have to throw out familiar software tools in order to build unified views of all devices under management. For customers, the important thing is that Redfish can be used within enterprise data centers – and across hybrid clouds spanning multiple data centers and CSP public clouds.

It will fit with RESTful APIs and JSON, which are already widely adopted by data centers. Importantly, a growing group of hardware and software vendors already support Redfish. This group includes: American Megatrends, Broadcom, Cisco, Dell EMC, Ericsson AB, Fujitsu, Hewlett Packard Enterprise (HPE), Huawei, IBM, Insyde, Inspur, Intel, Lenovo, Mellanox, Microsemi, Microsoft, NetApp, Oracle, OSIsoft, Quanta, Supermicro, Vertiv, VMware and Western Digital.

Clearly, there is more work to be done, and more “pieces” to solve the interop puzzle need to be put in place. The fact that Redfish is being supported by many companies – and that some of them are direct competitors – is a good sign for wider adoption.

The reason for their cooperation: interoperability is table stakes for SDI.

 


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Tue, 14 Mar 2017 20:19:14 +0000
IBM Quantum Computing Jumps to Commercial Use Via Cloud
http://hurwitz.com/blogs/judith-balancing-act/entry/ibm-quantum-computing-jumps-to-commercial-use-via-the-cloud

IBM’s quantum computing technology, developed over decades, is ready for commercialization. It is a fundamentally different approach to computing than is used in today’s systems – and, as such, represents a watershed in computing history.

By allowing scientists and researchers to model the complexities inherent in natural phenomena and financial markets, quantum computing is a new approach to the way in which computing itself is done. It is different from Big Data analytics, which finds patterns in vast amounts of data. Rather, it will generate new types of data characterizing phenomena that couldn’t be quantified before.

What began deep in the IBM research labs in New York and Zurich is now ready to provide computing services, via the IBM Bluemix cloud.

On March 6, 2017, IBM announced its initiative to build commercially available quantum computing systems.

  • The IBM Q quantum systems and services will be delivered via the IBM Cloud platform. The core computing will be done on “qubits,” which are the quantum computing units for programming. The qubits can be orchestrated to work together; up to now, five qubits have been available to early users, and more qubits will become available in 2017.
  • IBM is releasing a new API (application programming interface) for IBM quantum computing, which will allow developers to program in widely used languages, such as Python. The resulting code will be able to access the quantum computing resources, housed in the data center of IBM’s Yorktown Heights, N.Y., research laboratory.
  • IBM is also releasing an upgraded simulator that can model circuits with up to 20 qubits. Later this year, IBM plans to release a full software development kit (SDK) on the IBM Quantum Experience that will allow programmers and users to build simple quantum applications.

 

Quantum Computing in Brief

Quantum computing is designed to generate data based on the physics principles of uncertainty. Many natural phenomena, such as the structure of molecules and medicines, can be better understood by analyzing thousands, or millions, of possibilities, or possible outcomes.

But the sheer scale of the work extends beyond the reach of classical computing used in today’s data centers and Cloud Service Providers (CSPs).

Unlike IBM Watson, which focuses on Big Data and analytics, quantum technology seeks to bring insights based on what is “not” known, rather than finding patterns in known data. Examples include learning more about chemical bonding and molecules, creating new cryptography algorithms, and advancing machine learning. This is done through an approach called “entanglement,” which explores and orchestrates large numbers of potential outcomes – and moves the resulting data through new types of high-speed communications links.

 

How It Works

Based on a technology that requires super-cooling at less than one degree Kelvin (a measure on the Kelvin temperature scale), IBM’s quantum computing marries five key elements: a new type of semiconductor processor built with silicon-based superconductors; on-chip nanotechnology; deep cooling containers that house the computer; programming with microwaves – and delivery via the cloud to end-users.  

The reason for the super-cooling is that quantum computing compares “quantum states” that are ever-changing in superconducting materials – making it impossible to pinpoint a given state as a computer “1” or a “0.” However, by leveraging extremely small gaps in electrical pulses traversing the super-cooled semiconductor surfaces, quantum computing finds the likelihood, or the probability, associated with multiple known “states” of the data – even though at any one moment the actual states of that data are in constant flux.

One quick example: It is impossible to find the exact position of all the electrons spinning inside specific molecules, preventing scientists from finding all the possible combinations of electrical bonds inside the molecule. This is important in medicine and pharmacology, where the quantum approach could extend the molecule-folding analysis that is widely used today in biotechnology. As a result, new medical treatments may emerge, and new approaches to drug development may be created. And, many more scenarios for research and exploration may open up with wider access to quantum computing capabilities.
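To make the probabilistic idea concrete in code, here is a purely mathematical NumPy sketch of an idealized two-qubit circuit – illustrative only, and unrelated to IBM Q hardware or its API – that builds the entangled Bell state and shows that measurement yields “00” or “11” with equal probability, and never “01” or “10”.

```python
# Idealized two-qubit simulation in NumPy; illustrative only, not IBM Q internals.
import numpy as np

zero = np.array([1.0, 0.0])                      # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate: creates superposition
CNOT = np.array([[1, 0, 0, 0],                   # controlled-NOT gate: creates entanglement
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>, apply H to the first qubit, then CNOT -> Bell state (|00> + |11>)/sqrt(2)
state = CNOT @ np.kron(H @ zero, zero)

# Measurement probabilities: only "00" and "11" ever occur, each about half the time
probs = np.abs(state) ** 2
for label, p in zip(["00", "01", "10", "11"], probs):
    print(f"P({label}) = {p:.2f}")
```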

 

A Bit of History

Only the core concept about quantum computing existed in 1981, when Nobel laureate Richard Feynman, the famed physics scientist, spoke at the Physics of Computation Conference, hosted by M.I.T. and IBM. (Feynman is best known for his physics work, and for discovering the design fault in the Space Shuttle’s O-Rings that caused the 1986 Challenger explosion.) During his 1981 presentation, Feynman challenged computer scientists to develop computers based on quantum physics.

By the late 1980s, this led to the creation of Josephson-Junction computers, which worked in a prototype super-cooled enclosure, but proved impractical to use in the enterprise data centers of that era. Some had considered its use in deep space – but even that proved to be “not cold enough” to achieve the quantum computing effects. But progress in quantum research continued in the 1990s and early 2000s, relating to programming code for quantum computers, deep cooling in physical data-center containers, and scaling up the quantum analysis.

Key developments in computer science itself have paved the way for the IBM quantum computing initiative. Stepping stones along the way included developing, and working with, the qubits, getting them to work together in “entanglements” to compare computing states, and improvements in coding quantum computers to interface with classical von Neumann computers based on “1s and 0s.”  

In quantum computing, tiny superconducting Josephson junctions [electrical gaps], operating in extremely low-temperature containers, find multiple possible outcomes in such fields as chemistry, astronomy and finance.

The advent of cloud allows quantum computing to take place in special environments, housed in super-cold, isolated, physical containers – while supporting end-user access from remote users worldwide. This model for accessibility changed the calculus for bringing quantum to the marketplace.

Now, the IBM Quantum Experience, as IBM is calling it, is more than an experiment, or a prototype. Rather, it is now discoverable as a new resource on the IBM Cloud, and is already being used by a select group of commercial customers, academic users, and IBM researchers.

 

Why Should Customers Care?

Certain classes of customers are likely to move into quantum computing analysis early on: Areas of interest would include finding new ways to model financial market data; discovery of new medicines and materials; optimizing supply chains and logistics, improving cloud security – and improving machine learning.

This next stage on the path to quantum computing will see collaborative projects involving programmer/developers, university researchers and computer scientists. IBM intends to build out an ecosystem around quantum computing. A number of researchers, including those at M.I.T., the University of Waterloo in Ontario, Canada, and the European Physical Society in Zurich, Switzerland, are already working with IBM on quantum computing. So far, 40,000 users have run more than 275,000 experiments on the IBM quantum computing resource. All are accessing IBM quantum computing – and IBM expects to expand the program throughout 2017.

In addition, there is an IBM Research Frontiers Institute, a consortium that looks at the business impact of new computing technologies. High-profile commercial companies that are founding members of the institute include Canon, Hitachi Metals, Honda, and Samsung – and IBM is asking other organizations to join as members.

 

Quantum Computing’s Future

The time is right for quantum computing – a new way to explore endless permutations of data about possible outcomes. It requires a different kind of technology – not the classical 1s and 0s of classical computing. It has taken decades to mature to the point where it is both accessible and programmable. It is still “early days” in quantum computing, but IBM’s moves to commercialize the technology are positive ones, now involving a wider group of partners in a new and evolving ecosystem around quantum computing.

 

 

 


jean.bozman@hurwitz.com (Jean Bozman) Cloud Computing Mon, 06 Mar 2017 21:07:13 +0000
Cybersecurity: Three Paths to Better Data Integration
http://hurwitz.com/blogs/judith-balancing-act/entry/cybersecurity-demands-fewer-data-silos-more-data-integration

Data integration from multiple security point-products is a real problem for many enterprise customers. There are too many threat intelligence feeds – and no easy way to view all incoming data in a contextual way that facilitates interpretation, analysis, and remediation. Therefore, knocking down your organization’s in-house security software silos may become an essential element to improving your cybersecurity profile.

Cybersecurity relies on tools to identify an increasing number of threats. However, the proliferation of security point products slows down the ability of organizations to identify threats and respond. For many customers, there is too much input, from too many sources – and few ways to analyze all of it efficiently.

Here are three paths to achieve better data integration for security:

  • Frameworks. For customers, the challenge is how to achieve IT simplification to better guide their defenses against security threats. Some customers will look to software frameworks that can plug in, or integrate, data from multiple point-products. This approach works: SIEM (Security Information and Event Management) is the most common consolidation point, but it is complex, and it may require adopting standardized APIs or agreeing to use a proprietary software framework from a single vendor.
  • Cloud services. Many customers are now looking to the cloud itself to allow them to scale up their security analytics, and to leverage Cloud Service Providers’ (CSP) security tools. Using that approach, CSPs gather security data, analyze it, and flow remediation recommendations outward to their rapidly growing customer bases.
  • Containers. A third approach is to containerize the applications and data sources. By using software-defined containers, the data is isolated, and the total “surface” area for attack is reduced. This approach was discussed at ContainerWorld 2017 in Santa Clara, CA, Feb. 21-22, 2017.

 

The bottom line: Integration of software inputs is essential to improving security in a highly networked IT environment. This data integration is critical to providing a unified view of threats facing businesses and organizations, so that they can be fully seen – and addressed by IT staff.
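As a small illustration of what that integration looks like in practice – the feed formats and field names below are hypothetical, not any vendor’s schema – the sketch normalizes events from two point-products into one common record so they can be sorted onto a single timeline.

```python
# Hypothetical feeds and field names; the point is the normalization pattern, not a vendor format.
def from_firewall(evt: dict) -> dict:
    return {"source": "firewall", "time": evt["ts"], "severity": evt["sev"],
            "host": evt["dst_ip"], "summary": evt["rule"]}

def from_endpoint(evt: dict) -> dict:
    return {"source": "endpoint", "time": evt["detected_at"], "severity": evt["risk"],
            "host": evt["hostname"], "summary": evt["malware_name"]}

def unified_view(firewall_events, endpoint_events):
    events = [from_firewall(e) for e in firewall_events] + [from_endpoint(e) for e in endpoint_events]
    return sorted(events, key=lambda e: e["time"])   # one timeline across all tools

fw = [{"ts": "2017-02-21T10:00:00Z", "sev": "high", "dst_ip": "10.0.0.5", "rule": "port-scan"}]
ep = [{"detected_at": "2017-02-21T10:02:00Z", "risk": "medium",
       "hostname": "db01", "malware_name": "trojan.generic"}]
for event in unified_view(fw, ep):
    print(event)
```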

  

Reducing Security Silos; Integrating Data

At the RSA security conference in San Francisco, David Ulevitch, founder of OpenDNS and a vice president of Cisco’s networking group, made the argument for data integration clearly. Customers need to reduce the number of information silos carrying security data – and they need to integrate the results for a full, 360-degree view of the security threats facing their organization.

His conclusion: cloud services will provide an efficient way to deliver security data more quickly and efficiently. Otherwise, standards battles over APIs will bog down progress – even as the threat “surface” expands from 50 billion devices to hundreds of billions in the IoT world.

Many speakers, in their RSA talks and presentations, came to a similar conclusion. For example, Ret. Gen Keith Alexander, former director of the NSA, told the Cloud Security Alliance (CSA) meeting at RSA that small companies, lacking the resources of large companies, would find it hard to address the growing security threats without leveraging security cloud services from CSPs.

 

Cast the Net Wider

Here’s why building a unified view of all security inputs is essential for companies seeking to defend their security perimeter: Without it, customers would likely miss important signals of threat behavior, and would not see “patterns in the data” that would point to security vulnerabilities. Large companies can well afford to maintain large IT staff, and to host their own, customized, security dashboards. But mid-size and small companies are looking to framework software partners and cloud services partners to extend their security “net” to find security intrusions and hacking.

 

Next Steps for Security Vigilance

Acquiring all of this software for on-site monitoring would become quite expensive, especially for SMBs. Even the big companies, with their larger attack surface and deeper investments in legacy infrastructure, will need help in pulling together as many security-related inputs as possible.

The rapid growth of the security ecosystem demands that customers pay close attention – spending much time winnowing through the long list of software products and security-related cloud services. Now, they need to take the next step, by integrating the data from their portfolio of security point-products.

 

 

My colleague Chris Christiansen contributed to this blog document. For more details, see The Bozman Blog on www.hurwitz.com.

 

 


jean.bozman@hurwitz.com (Jean Bozman) Security Tue, 28 Feb 2017 21:04:49 +0000