• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support, and migration, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions, and of course also for the great Red Hat products such as Red Hat Enterprise Linux, JBoss middleware, and BI on Red Hat.

  • Microsoft Consulting

    For consulting services related to Microsoft Windows Server 2012 onwards, Windows 7 and higher clients, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization, and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), and responsive and adaptive websites.

23 March 2018

Announcing General Availability of Red Hat CloudForms 4.6

CloudForms and Ansible Integration

Red Hat CloudForms 4.6 is now generally available, as announced in the recent press release. One of the key highlights of the release is the introduction of Lenovo XClarity as the first physical infrastructure provider, enabling CloudForms to go beyond hybrid cloud management and manage hybrid infrastructure.

CloudForms 4.6 continues to build on the automation-centric approach to multi-cloud management that was introduced in 4.5, aligning with Red Hat’s vision to simplify IT management with Ansible’s powerful automation capabilities.

Additional enhancements focus on provider capabilities and usability. Let’s take a closer look at what’s new in CloudForms 4.6, and be sure to check back in on this blog for more detailed posts on many of these new capabilities in the coming weeks.

Red Hat Management Demos

New Lenovo XClarity Provider: enables CloudForms to discover and manage Lenovo physical compute infrastructure alongside virtual and multi-cloud through a single pane of glass.

Ansible Automation Inside:  
  • Call Ansible playbooks as methods in state machines, allowing for hybrid Ruby and Ansible orchestration.
  • Compute resource linking in services, providing visibility of Ansible deployed compute items.
  • Provide a foundational layer to curate Ansible modules, adding secure authentication for Ansible callbacks to CloudForms.
  • Support additional Ansible credentials, including OpenStack, Azure, Google, Satellite, Subversion, GitLab, as well as Ansible Networking.

Additional provider enhancements: Red Hat OpenShift Container Platform, Red Hat OpenStack Platform, and Red Hat Virtualization

Usability enhancements for the Administrative User Interface:
  • Dynamic Resource Objects to quickly add the capability to provision and collect data on resources not supported by Red Hat CloudForms
  • Prometheus Alert Management
  • New service editor for easier service design
  • Create custom buttons in the Administrative Interface for frequent actions

Operations User Interface:
  • Enhanced snapshot management with more views for increased visibility
  • Improved user experience for resource details
  • Enhanced service dialog with validation of dialog fields as you type and more tool tips
  • Create custom buttons in the Operations User Interface for frequent actions
  • Additional Operations User Interface customization options to meet customer requirements for branding and access control

Red Hat CloudForms 4.6

Red Hat CloudForms 4.6 builds on the automation-centric foundation for multi-cloud management introduced in CloudForms 4.5, including increased support for automated infrastructure provisioning and scaling of Red Hat OpenShift Container Platform and Red Hat OpenStack Platform deployments. CloudForms 4.6 is designed to make more Ansible capabilities available natively within CloudForms, including the ability for CloudForms to execute Ansible playbooks, and visibility into and linking of Ansible-deployed compute resources.

Integrate OpenShift with CloudForms

Red Hat CloudForms 4.6 also introduces Lenovo XClarity as the first physical infrastructure provider, enabling CloudForms to go beyond hybrid cloud management and manage hybrid infrastructure. The new Lenovo XClarity provider enables CloudForms to discover and manage physical compute infrastructure alongside virtual and multi-cloud through a single pane of glass. This view helps system administrators determine on-premises capacity, analyze the impact of infrastructure modifications on workloads, and control infrastructure maintenance.

This video demonstrates how you can take manual tasks and processes and turn them into automation workflows. In this video we utilize Red Hat CloudForms and Ansible Tower to provide an underlying automation and orchestration framework to deliver automation to your IT organization.

Containers, OpenShift, Kubernetes all with Red Hat CloudForms

The demonstration shows how a user can order a service and have automation provision and deliver the resources while tracking the elements in a ticketing system (ServiceNow).

At a high level, the following areas are demonstrated:
  • Ordering an instance inside CloudForms self-service portal
  • CloudForms auto approval and quota escalation features
  • Ansible Tower’s powerful and intuitive workflows
  • Integration into third party web services (ServiceNow and Microsoft Azure)

This technical presentation details the integration points and technical value of all four Red Hat® Cloud Infrastructure components: Red Hat Enterprise Linux® OpenStack® Platform, Red Hat Enterprise Virtualization, Red Hat CloudForms, and Red Hat Satellite. This session will also illustrate several different deployment scenarios that this flexible offering allows. In addition, you'll learn about common integration … Full session details here: https://www.redhat.com/en/technologies/cloud-computing/cloud-infrastructure, http://itinfrastructure.report/view-resource.aspx?id=958, and https://www.openstack.org/videos/

The definitive OpenStack Map

Presenting the OpenStack map, the process that went through its creation, and the next steps.

Automating CloudForms Appliance Deployment with Ansible

Red Hat CloudForms ships as an appliance to simplify deployment as much as possible – a Red Hat Enterprise Linux server with the appropriate software loaded, ready to be configured with a few basic configuration options.

Traditionally, these servers are configured using the command line tool appliance_console. This is a simple, menu-based interface that allows you to configure the core functionality of the appliance and makes it exceptionally easy to do so. Unfortunately, menu-based interfaces don’t lend themselves to being automated easily.

However, there is a solution!

OpenStack Cloud Management and Automation Using Red Hat CloudForms 4.0

All CloudForms appliances ship with another tool called appliance_console_cli. We can combine this tool with an Ansible playbook to automate the configuration of our appliance(s).

Before we go further, take a look at the sample playbook located on GitHub. This playbook shows a simple scenario that configures two appliances:

  • A primary database appliance, for which we use a separate disk and configure an internal VMDB
  • A non-VMDB appliance, which joins the region in the primary database

The playbook sets some standard configuration for all the appliances – namely a common root password and an appropriate hostname – then uses the appliance_console_cli tool through the Ansible shell module. A minimal sketch of such a play is shown below.
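
As a rough illustration, the primary appliance's play might look like the sketch below. The group name, region number, password, and disk device are assumptions for illustration, not values taken from the sample playbook:

```yaml
---
# Sketch only: group name, region, password, and disk device are
# illustrative assumptions, not values from the sample playbook.
- name: Configure the primary CloudForms appliance
  hosts: cfme_primary
  become: true
  vars:
    cfme_region: 0              # hypothetical region number
    db_password: "CHANGEME"     # keep real secrets in Ansible Vault

  tasks:
    - name: Set the appliance hostname (also updates /etc/hosts)
      shell: "appliance_console_cli --host {{ inventory_hostname }}"

    - name: Create an internal VMDB on a dedicated disk
      shell: >
        appliance_console_cli
        --region {{ cfme_region }}
        --internal
        --password {{ db_password }}
        --dbdisk /dev/vdb
```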

Let’s take a look at some of the key options available to appliance_console_cli, as of CloudForms 4.5. This isn’t an exhaustive list, so have a look at the help output of the command to see them all:

Server configuration options

  • --host: set the hostname for the appliance. Also updates your /etc/hosts – handy!
  • --ipaserver, --ipaprincipal, --ipapassword, --ipadomain and --uninstall: establish this host in an IPA realm, using the principal and password you provide. Note the principal must have the privileges needed to register the host and register a service.
  • --logdisk, --tmpdisk: specify the devices used for the log and tmp directories.

Database options

  • --region: the region for the appliance; needed when establishing a database
  • --internal: specify this if you want to create an internal database (i.e., you're not connecting to a remote PostgreSQL database)
  • --hostname, --port, --username, --password, --dbname: key details for your database. Without the --internal parameter, these are used to join your appliance to an external database.
  • --dbdisk: specify a device to use for the PostgreSQL data directory. Very handy!

Preparing the appliance

  • --fetch-key, --sshlogin, --sshpassword: fetch the v2_key encryption key from a remote appliance with the provided SSH login credentials. All appliances connected to a VMDB need the same v2_key!
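
Putting those options together, joining a second (non-VMDB) appliance to an existing region might look something like the hedged sketch below; the primary appliance's address, the credentials, and the database name are placeholders:

```yaml
---
# Sketch only: the primary appliance address, credentials, and database
# name are placeholders.
- name: Join a non-VMDB appliance to the existing region
  hosts: cfme_secondary
  become: true
  vars:
    primary_host: "cfme-db01.example.com"
    db_password: "CHANGEME"
    root_password: "CHANGEME"

  tasks:
    - name: Fetch the v2_key from the primary, then join its VMDB
      shell: >
        appliance_console_cli
        --fetch-key {{ primary_host }}
        --sshlogin root
        --sshpassword {{ root_password }}
        --hostname {{ primary_host }}
        --username root
        --password {{ db_password }}
        --dbname vmdb_production
```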

CloudForms 4.6 extends the commands of appliance_console_cli and brings it closer to feature parity with appliance_console. A major improvement is the ability to configure database replication on the command line, simply by passing different parameters on your primary and standby nodes. Super useful! This will be the focus of a future article, and I'll extend the playbook to deploy two VMDB appliances in a primary/standby configuration.

What are you waiting for? Head to Red Hat Customer Portal and try out the CloudForms 4.6 Beta! General Availability is just around the corner…

Ansible Automation

Don't forget, the upcoming release of CloudForms 4.6 brings improved embedded Ansible Automation Inside capabilities. If you are not familiar, Embedded Ansible has been a feature of CloudForms since version 4.5 and allows you to store and execute Ansible playbooks from within CloudForms.

For example, Ansible Automation allows you to execute a playbook as part of a Service Catalog request to configure provisioned VMs for the requester. Alternatively, a playbook can be executed when a user interface button is pressed, or in response to an event or alert.

Automating the Enterprise with CloudForms & Ansible

Ansible Modules and CloudForms

Ansible 2.4 provides Ansible modules to manage CloudForms: manageiq_provider and manageiq_user. These modules use the CloudForms REST API to automate the configuration of providers and users.

Combining these configuration modules and the playbook above allows you to provision and configure CloudForms appliances, define users in the VMDB, and configure new providers – all in a single play!
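
As a flavor of these modules, a minimal manageiq_user task might look like the sketch below; the appliance URL, credentials, and group are placeholders, and option names may differ slightly between Ansible releases:

```yaml
---
# Sketch only: URL, credentials, and group name are placeholders.
- name: Define a CloudForms user via the REST API
  hosts: localhost
  connection: local
  tasks:
    - name: Ensure the user jdoe exists in the VMDB
      manageiq_user:
        userid: jdoe
        name: Jane Doe
        password: "CHANGEME"
        group: EvmGroup-user
        state: present
        manageiq_connection:
          url: "https://cfme.example.com"
          username: admin
          password: smartvm
          verify_ssl: false
```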


Ansible is being embedded throughout all cloud software platforms at Red Hat, and CloudForms is no exception. Keep an eye out for future posts in this series, where we will test drive some of the new features of appliance_console_cli in the upcoming 4.6 release.

21 February 2018

Cloudify 3.4 Brings Open Source Orchestration to Top Five Public Clouds with New Azure Support and Full Support for VMware, OpenStack Private Clouds

Cloudify 3.4 Brings Open Source Orchestration to Top Five Public Clouds

The latest version of Cloudify open source multi-cloud orchestration software—Cloudify 3.4—is now available. It brings pure-play cloud orchestration to every major public and private cloud platform—Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP), OpenStack and VMware—as well as cloud native technologies like Kubernetes and Docker. The software is at work across multiple industries and geographies, and it has become a preferred orchestrator for telecom providers deploying network functions virtualization (NFV).

***Cloudify 3.4 is now available for download here.***

Cloudify is the only pure-play, standards-based (TOSCA) cloud orchestration platform that supports every major private and public cloud infrastructure offering. With Cloudify, enterprises can use a single, open source cloud orchestration platform across OpenStack, VMware or AWS clouds, with virtualization approaches such as VMs or containers and with different automation toolsets like Puppet, Chef or Saltstack. Because it provides an easy-to-use, open source tool for management and orchestration (MANO) of multiple clouds, data centers and availability zones, Cloudify is attractive to telecoms, internet service providers, and enterprises using hybrid cloud.

Key Feature Summary

  • Enterprise-Grade Enhanced Hybrid Cloud Support – supports all major public and private cloud environments, including AWS, Azure, GCP, OpenStack, and VMware vSphere and vCloud
  • Support for Entire VMware Stack – the only open source orchestration platform supporting the entire VMware stack; all VMware plugins are open source and available in the Cloudify Community edition
  • Public Shared Images for both AWS and OpenStack – prebaked Cloudify Manager environments now available for AWS through a shared AMI, and OpenStack through a QCOW image; enables simple bootstrapping of a full-fledged Cloudify environment in minutes
  • Deployment Update – allows updating of application deployments, enabling application operations engineers and developers to introduce topology changes and include new resources in running TOSCA deployments
  • In-Place Manager Upgrade – the new Cloudify Manager upgrade process provides fully automated in-place upgrades for all manager infrastructure without any downtime to the managed services; in-place upgrade allows easy migration between Cloudify versions and application of patched versions

Cloudify 3.4 Enhanced for Hybrid Cloud, Microservices

OpenStack Ottawa Meetup - March 29th 2017

The new release enhances Cloudify usability among enterprises looking for hybrid cloud orchestration without compromising on solutions that cater to the least common denominator of API abstraction. It does this by offering greater support of IaaS, enhanced usability, quicker installation and improved maintenance processes. Cloudify 3.4 introduces plugins for Microsoft Azure and GCP, complementing the existing portfolio of plugins for OpenStack, AWS and VMware vSphere and vCloud, which are now all open source. The new release also enhances support for container orchestration and container lifecycle management, including microservices modeling and enhanced support for Kubernetes.

The New Hybrid Stack with New Kubernetes Support

Cloudify 3.4 adds support for the Kubernetes container management project, enabling users to manage hybrid stacks that include both microservices on top of Kubernetes, alongside stateful services such as backends on bare-metal and VMs. It also manages composition and dependency management between services, as well as triggering of auto-scaling of both the micro-services and Kubernetes minions.

Continuous Deployment Across Clouds

Managing applications across hybrid environments and stacks goes far beyond infrastructure-layer orchestration. DevOps processes such as continuous deployment can be difficult to apply across clouds in hybrid environments. Cloudify 3.4 comes with a new set of features that enables pushing updates to both the application and the infrastructure itself.

OpenStack Benefits for VMware

Cloudify for Telecom Operators

Cloudify 3.4 continues the open disruption in telecom and strengthens even further the offering for telecom service providers with its “Cloudify for NFV MANO (Management and Orchestration)” offering, which includes a robust set of new features, NFV-specific plugins, and blueprints showcasing modeling of VNFs (Virtual Network Functions) and SFC (Service Function Chaining) using TOSCA.

Media Resources

Cloudify 3.4 Has Landed - Learn More
Hybrid Cloud Blog Posts
Online Kubernetes Lab and Hybrid Cloud Module
Hybrid Cloud in Production Webinar
New Cloudify Telco Edition

About GigaSpaces

GigaSpaces Technologies provides software for cloud application orchestration and scaling of mission-critical applications on cloud environments. Hundreds of tier-one organizations worldwide are leveraging GigaSpaces technology to enhance IT efficiency and performance, including top financial firms, e-commerce companies, online gaming providers, healthcare organizations and telecom carriers. GigaSpaces has offices in the US, Europe and Asia. More at www.gigaspaces.com and getcloudify.org

Microsoft introduces Azure Stack, its answer to OpenStack

Microsoft has taken the wraps off Azure Stack, its take on hybrid cloud infrastructure and response to the popular OpenStack open-source cloud computing package. Azure Stack will begin shipping in September.

Azure Stack was originally designed as a software-only product, much like OpenStack. But Microsoft has decided to add integrated hardware turnkey solutions from its certified partners such as Dell EMC, HPE, Lenovo, Cisco and Huawei.

Microsoft first announced Azure Stack at the Ignite Conference in 2015 and formally introduced it at the Inspire conference in Washington, D.C.

Azure Stack is basically the same APIs, tools and processes that power Azure, but it’s intended to be hosted on-premises in private cloud scenarios. By offering the same platform and tools both on-premises and in Azure, the company promises consistency and ease of deployment, whether it’s hosted locally or in the cloud.
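
As a hedged illustration of that consistency, the sketch below creates a resource group with one and the same Ansible task, whether it points at public Azure or at an Azure Stack ARM endpoint. It assumes the Azure modules' cloud_environment option; the resource group name and locations are placeholders:

```yaml
---
# Sketch only: resource group name and locations are placeholders;
# credentials are taken from the environment or ~/.azure/credentials.
- name: Same task, public Azure or Azure Stack
  hosts: localhost
  connection: local
  vars:
    # "AzureCloud" targets public Azure; an Azure Stack deployment is
    # selected by pointing at its ARM metadata endpoint instead.
    target_cloud: "AzureCloud"
  tasks:
    - name: Ensure a resource group exists
      azure_rm_resourcegroup:
        name: demo-rg
        location: westeurope    # e.g. "local" on an ASDK deployment
        cloud_environment: "{{ target_cloud }}"
```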

It also makes it possible to deploy different instances of the same app to meet regulatory compliance – for example, a financial app with different business or technical requirements, or regulatory limits on what can go into the cloud. Both apps can be based on the same codebase, with one slightly altered for the cloud.

The Cloud On Your Terms - Azure PaaS Overview

“The ability to run consistent Azure services on-premises gets you full flexibility to decide where applications and workloads should reside,” said Mike Neil, corporate vice president for Azure Infrastructure and Management, in the blog post accompanying the announcement.

Azure Stack will use two pricing models: pay-as-you-use, similar to what you would get with the Azure service, and capacity-based, where customers will pay a fixed annual fee based on the number of physical cores in a system.

Omni v2.0 GCE, Azure integration and support for multiple regions

There will also be an option of having Azure Stack delivered and operated as a fully managed service. The services will be managed by data center operators such as Avanade, Daisy, Evry, Rackspace and Tieto. These companies are already delivering services around Azure.

Microsoft has said that its goal is to ensure that most ISV applications and services that are certified for Azure will work on Azure Stack. ISVs such as Bitnami, Docker, Kemp Technologies, Pivotal Cloud Foundry, Red Hat Enterprise Linux and SUSE Linux are working to make their solutions available on Azure Stack.

Microsoft also announced the Azure Stack Development Kit (ASDK), a free single-server deployment SDK for building and validating applications on the Azure Stack.

Throughout the Technical Previews, we’ve seen tremendous customer and partner excitement around Microsoft Azure Stack. In fact, we’re speaking with thousands of partners this week at our Microsoft Inspire event. Our partners are excited about the new business opportunities opened up by our ‘One Azure Ecosystem’ approach, which helps them extend their Azure investments to Azure Stack, to unlock new possibilities for hybrid cloud environments. In that vein, today we are announcing:

  • Orderable Azure Stack integrated systems: We have delivered Azure Stack software to our hardware partners, enabling us to begin the certification process for their integrated systems, with the first systems to begin shipping in September. You can now order integrated systems from Dell EMC, HPE, and Lenovo.
  • Azure Stack software pricing and availability: We have released pricing for the pay-as-you-use and capacity-based models today; you can use that information to plan your purchases.
  • Azure Stack Development Kit (ASDK) availability: ASDK, the free single-server deployment option for trial purposes, is available for web download today. You can use it to build and validate your applications for integrated systems deployments.

Azure Stack promise

Azure Stack is an extension of Azure, thereby enabling a truly consistent hybrid cloud platform. Consistency removes hybrid cloud complexity, which helps you maximize your investments across cloud and on-premises environments. Consistency enables you to build and deploy applications using the exact same approach – same APIs, same DevOps tools, same portal – leading to increased developer productivity. Consistency enables you to develop cloud applications faster by building on Azure Marketplace application components. Consistency enables you to confidently invest in people and processes knowing that those are fully transferable. The ability to run consistent Azure services on-premises gets you full flexibility to decide where applications and workloads should reside. An integrated systems-based delivery model ensures that you can focus on what matters to your business (i.e., your applications), while also enabling us to deliver Azure innovation to you faster.

In its initial release, Azure Stack includes a core set of Azure services, DevOps tooling, and Azure Marketplace content, all of which are delivered through an integrated systems approach. Check out this whitepaper for more information about what capabilities are available in Azure Stack at the initial release and what is planned for future versions.

Hybrid use cases unlock application innovation

Azure and Azure Stack unlock new use cases for customer facing and internal line of business applications:

  • Edge and disconnected solutions: You can address latency and connectivity requirements by processing data locally in Azure Stack and then aggregating it in Azure for further analytics, with common application logic across both. We're seeing lots of interest in this edge scenario across different contexts, including factory floors, cruise ships, and mine shafts.
  • Cloud applications that meet varied regulations: You can develop and deploy applications in Azure, with full flexibility to deploy on-premises on Azure Stack to meet regulatory or policy requirements, with no code changes needed. Many customers are looking to deploy different instances of the same application – for example, a global audit or financial reporting app – to Azure or Azure Stack, based on business and technical requirements. While Azure meets most requirements, Azure Stack enables on-premises deployments in locations where it's needed. Saxo Bank is a great example of an organization that plans to leverage the deployment flexibility enabled by Azure Stack.
  • Cloud application model on-premises: You can use Azure web and mobile services, containers, serverless, and microservice architectures to update and extend existing applications or build new ones. You can use consistent DevOps processes across Azure in the cloud and Azure Stack on-premises. We're seeing broad interest in application modernization, including for core mission-critical applications. Mitsui Knowledge Industry is a great example of an organization planning their application modernization roadmap using Azure Stack and Azure.
Ecosystem solutions across Azure and Azure Stack

You can speed up your Azure Stack initiatives by leveraging the rich Azure ecosystem:

  • Our goal is to ensure that most ISV applications and services that are certified for Azure will work on Azure Stack. Multiple ISVs, including Bitnami, Docker, Kemp Technologies, Pivotal Cloud Foundry, Red Hat Enterprise Linux, and SUSE Linux, are working to make their solutions available on Azure Stack.
  • You have the option of having Azure Stack delivered and operated as a fully managed service. Multiple partners, including Avanade, Daisy, Evry, Rackspace, and Tieto, are working to deliver managed service offerings across Azure and Azure Stack. These partners have been delivering managed services for Azure via the Cloud Solution Provider (CSP) program and are now extending their offerings to include hybrid solutions.
  • Systems Integrators (SIs) can help you accelerate your application modernization initiatives by bringing in-depth Azure skillsets, domain and industry knowledge, and process expertise (e.g., DevOps). PricewaterhouseCoopers (PwC) is a great example of an SI that's expanding their consulting practice to Azure and Azure Stack.
Orderable integrated systems, free single-server kit for trial
Azure Stack has two deployment options:

  • Azure Stack integrated systems – These are multi-server systems meant for production use, and are designed to get you up and running quickly. Depending upon your hardware preferences, you can choose integrated systems from Dell EMC, HPE, and Lenovo (with Cisco and Huawei following later). You can now explore these certified hardware solutions and order integrated systems by contacting our hardware partners. These systems come ready to run and offer consistent, end-to-end customer support no matter who you call. They will initially be available in 46 countries covering key markets across the world.
  • Azure Stack Development Kit (ASDK) – ASDK is a free single-server deployment that's designed for trial and proof-of-concept purposes. ASDK is available for web download today, and you can use it to prototype your applications. The portal, Azure services, DevOps tools, and Marketplace content are the same across this ASDK release and integrated systems, so applications built against the ASDK will work when deployed to a multi-server system.
Closing thoughts
As an extension of Azure, Azure Stack will deliver continuous innovation with frequent updates following the initial release. These updates will help us deliver enriched hybrid application use cases, as well as grow the infrastructure footprint of Azure Stack. We will also continue to broaden the Azure ecosystem to enable additional choice and flexibility for you.

Cloud Orchestration with Azure and OpenStack – The Less Explored Hybrid Cloud

Often, when hybrid cloud is discussed, the natural choices for such a discussion center around OpenStack coupled with VMware, or AWS coupled with OpenStack, and even diverse clouds and container options – but Azure coupled with OpenStack is a much less common discussion.

OpenStack Summit Vancouver 2018

This is actually quite an anomaly when you think about it, as both Azure's public cloud and OpenStack's private cloud are highly enterprise-targeted.  With Azure boasting enterprise-grade security and encryption, and even offering the newly announced Azure Stack aimed at helping enterprises bridge the gap between their data centers and the cloud, and with OpenStack's inherent openness of APIs enabling enterprises to build their own cloud, these should naturally fit together in the cloud landscape.  Yet this combination is surprisingly often overlooked.

Free, open source, hybrid cloud orchestration – need I say more?  Get Cloudify

Nati Shalom recently discussed, in his post Achieving Hybrid Cloud Without Compromising On The Least Common Denominator, a survey demonstrating that enterprises these days are often leveraging as many as six clouds simultaneously, and the list just keeps on growing with new technologies sprouting up by the minute.

That's why solutions like Azure Stack – which are also geared towards multi-cloud scenarios in the context of app migration to the cloud from traditional data centers, especially while taking into account all of the enterprise-grade considerations involved in such a transition – are critical.

Project Henosis Unified management of VMs and Container based infrastructure for OpenStack

Historically, in order to achieve cloud portability you would need to cater to the least common denominator by abstracting your application from all of the underlying logic of the infrastructure below, but this type of model comes at a costly price: you lose all of the actual advantages the specific cloud provides.  What if there were a better approach?  A way to achieve interoperability and extensibility between clouds, all while taking full advantage of the underlying cloud's capabilities and service portfolio?

Highlights of OpenStack Mitaka and the OpenStack Summit

But even so, many solutions today don't provide the extensibility and interoperability enterprises need for future-proofing, application deployment portability, and other popular use cases across clouds.  Hybrid cloud itself has also proven that it isn't immune to disruption, with disruptive technologies arising every day – not unlike the latest and greatest disruptor, containers (read more on The Disruption Cycle).  This means that the new approach needs to be built for hybrid stacks, not just clouds, all while providing the full functionality of the underlying infrastructure.

Enter TOSCA (the OASIS standard for cloud applications).  TOSCA was written for this exact scenario, and provides inherent cloud interoperability and agnosticism.  The TOSCA approach is intended to standardize the way applications are orchestrated in cloud environments.  And enterprises love standards.  Building one syntax and vocabulary enables organizations to adapt to the fast-paced world of cloud in a substantially simplified manner.
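
As a flavor of that single syntax, here is a minimal TOSCA-based Cloudify blueprint sketch; the input and node names are illustrative assumptions:

```yaml
# Minimal Cloudify (TOSCA) blueprint sketch - the input and node names
# are illustrative assumptions.
tosca_definitions_version: cloudify_dsl_1_3

imports:
  - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml

inputs:
  host_ip:
    description: IP address of an existing host to manage

node_templates:
  app_host:
    type: cloudify.nodes.Compute
    properties:
      ip: { get_input: host_ip }
      install_agent: false
```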

Security for Cloud Computing: 10 Steps to Ensure Success V3.0

Cloudify, based on TOSCA and built from the ground up as an integration platform, leverages standardized templating, workflows, and cloud plugins to provide a single pane of glass across technologies that wouldn't natively or intuitively plug into each other – such as OpenStack and Azure, Kubernetes or Docker, and non-virtualized environments like traditional data centers.  Cloudify makes it possible to choose a technology that adapts to the way your organization works or would like to work, rather than requiring you to adapt your technologies, stacks, or practices to the technologies you adopt.

Templating languages such as TOSCA enable far greater flexibility for abstraction than API abstraction, providing the level of extensibility and customization that enterprises require without the need to develop or change the underlying implementation code.  This is why major projects such as ARIA, Tacker, and OpenStack Heat are building solutions based on this standard.

In this way, Azure users now have a set of building blocks for managing the entire application stack and its lifecycle, across clouds, stacks and technologies. And with Microsoft now proudly having the most open source developers on GitHub, yup – ahead of Facebook, Angular, and even Google & Docker amazingly – Azure is uniquely positioned to achieve this level of openness and interoperability.

Montreal Linux MeetUp - OpenStack Overview (2017.10.03)

This will also ultimately provide a higher degree of flexibility that allows users to define their own level of abstraction per use case or application.  In this manner, cloud portability is achievable without the need to change the underlying code, enabling true hybrid cloud.

China's largest OpenStack Cloud accelerates the science discovery of AMS-02

23 January 2018

Micro-segmentation Defined – NSX Securing "Anywhere"

Why It’s Time to Build a Zero Trust Network

Network security, for a long time, has worked off of the old Russian maxim, “trust but verify.” Trust a user, but verify it’s them. However, today’s network landscape — where the Internet of Things, the Cloud, and more are introducing new vulnerabilities — makes the “verify” part of “trust but verify” difficult and inefficient. We need a simpler security model. That model: Zero Trust.

The Next Generation Network model
VMware NSX and Micro-Segmentation

Forrester Research coined the term “Zero Trust” to describe a model that prevents common and advanced persistent threats from traversing laterally inside a network. This can be done through a strict, micro-granular security model that ties security to individual workloads and automatically provisions policies. It’s a network that doesn’t trust any data packets. Everything is untrusted. Hence: Zero Trust.

So how can you deploy the Zero Trust model? Should you? To answer these questions and more, we've gathered John Kindervag, VP and Principal Analyst at Forrester Research, and our own VMware NSX experts to discuss Zero Trust, micro-segmentation, and how VMware NSX makes it all happen in our webinar, "Enhancing Security with Zero Trust, The Software-Defined Data Center, and Micro-segmentation." Best of all: you can watch it on-demand, on your own.

VMware NSX Security and Micro-segmentation

VMware NSX is the network virtualization platform for the Software-Defined Data Center. NSX brings the operational model of virtual machines to your data center network. This allows your organization to overcome the hardware-defined economic and operational hurdles keeping you from adopting a Zero Trust model and better overall security.

To learn more about how VMware NSX can help you be twice as secure at half the cost, visit the NSX homepage and follow @VMwareNSX on Twitter for the latest in micro-segmentation news.

The landscape of the modern data center is rapidly evolving. The migration from physical to virtualized workloads, the move towards software-defined data centers, the advent of a multi-cloud landscape, the proliferation of mobile devices accessing the corporate data center, and the adoption of new architectural and deployment models such as microservices and containers have ensured that the only constant in modern data center evolution is the quest for higher levels of agility and service efficiency. This march forward is not without peril, as security often ends up being an afterthought. The operational dexterity achieved through the ability to rapidly deploy new applications overtakes the ability of traditional networking and security controls to maintain an acceptable security posture for those application workloads. That is in addition to the fundamental problem that traditionally structured security does not work adequately even in more conventional and static data centers.

VMware NSX for vSphere - Intro and use cases

Without a flexible approach to risk management that adapts to the onset of new technology paradigms, security silos using disparate approaches are created. These silos act as control islands, making it difficult to apply risk-focused predictability to your corporate security posture and causing unforeseen risks to be realized. These actualized risks cause an organization's attack surface to grow as the adoption of new compute technology increases, heightening susceptibility to advanced threat actors.

A foundational aspect of solving this problem is the ability to implement micro-segmentation anywhere. NSX is a networking and security platform able to deliver micro-segmentation across all the evolving components comprising the modern datacenter. NSX based micro-segmentation enables you to increase the agility and efficiency of your data center while maintaining an acceptable security posture. The following blog series will define the necessary characteristics of micro-segmentation as needed to provide effective security controls within the modern data center and demonstrate how NSX goes beyond the automation of legacy security paradigms in enabling security through micro-segmentation.


It is no longer acceptable to rely on the traditional approach to data center network security, built around a very strong perimeter defense but with virtually no protection inside the perimeter. This model offers very little protection against the most common and costly attacks occurring against organizations today, which include attack vectors originating within the perimeter. These attacks infiltrate your perimeter, learn your internal infrastructure, and laterally spread through your data center.

Architecting-in Security with Micro-Segmentation

The ideal solution to complete datacenter protection is to protect every traffic flow inside the data center with a firewall and only allow the flows required for applications to function.  This is also known as the Zero Trust model.  Achieving this level of protection and granularity with a traditional firewall is operationally unfeasible and cost prohibitive, as it would require traffic to be hair-pinned to a central firewall and virtual machines to be placed on individual VLANs (also known as pools of security).

Wade Holmes - Tackling Security Concerns with Micro segmentation

A typical 1-rack-unit top-of-rack data center switch performs at approximately 2 Tbps, while the most advanced physical firewall performs at 200 Gbps in a 19-rack-unit physical appliance, providing 10% of the usable bandwidth. Imagine the network resource utilization bottlenecks created by having to send all east-west communication from every VM to every other VM through a physical firewall, and how quickly you would run out of available VLANs (limited to 4096) to segment workloads into application-centric pools of security. This is a fundamental architectural constraint created by traditional security architecture that hampers the ability to maintain an adequate security posture within a modern data center.


Micro-segmentation decreases the level of risk and increases the security posture of the modern data center. So what exactly defines micro-segmentation? A solution that provides micro-segmentation requires a combination of the following capabilities, which together enable the outcomes noted below.

VMware NSX 101: What, Why & How

Distributed stateful firewalling for topology agnostic segmentation – Reducing the attack surface within the data center perimeter through distributed stateful firewalling and ALGs (Application Level Gateway) on a per-workload granularity regardless of the underlying L2 network topology (i.e. possible on either logical network overlays or underlying VLANs).

VMware NSX Component Overview w Tim Davis @aldtd #vBrownBag #RunNSX

Centralized ubiquitous policy control of distributed services – Enabling the ability to programmatically create and provision security policy through a RESTful API or integrated cloud management platform (CMP).

Granular unit-level controls implemented by high-level policy objects – Enabling the ability to utilize security groups for object-based policy application, creating granular application-level controls not dependent on network constructs (i.e., security groups can use dynamic constructs such as OS type or VM name, or static constructs such as Active Directory groups, logical switches, VMs, port groups, IP sets, etc.). Each application can now have its own security perimeter without relying on VLANs. See the DFW Policy Rules Whitepaper for more information. A sketch of creating such a security group through the NSX REST API follows.
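
As a hedged sketch of that object-based model, the Ansible task below creates a security group whose membership is driven by guest OS name through the NSX-v REST API. The endpoint and XML shape follow our reading of the NSX-v 6.x API guide; the manager address, credentials, and group name are placeholders:

```yaml
---
# Sketch only: manager address, credentials, and the XML schema should
# be verified against the NSX-v API guide for your version.
- name: Create a dynamic security group for Windows VMs
  hosts: localhost
  connection: local
  tasks:
    - name: POST a security group with OS-based dynamic membership
      uri:
        url: "https://{{ nsx_manager }}/api/2.0/services/securitygroup/bulk/globalroot-0"
        method: POST
        user: "{{ nsx_user }}"
        password: "{{ nsx_password }}"
        force_basic_auth: true
        validate_certs: false
        headers:
          Content-Type: application/xml
        body: |
          <securitygroup>
            <name>SG-Windows-VMs</name>
            <dynamicMemberDefinition>
              <dynamicSet>
                <operator>OR</operator>
                <dynamicCriteria>
                  <operator>OR</operator>
                  <key>VM.GUEST_OS_FULL_NAME</key>
                  <criteria>contains</criteria>
                  <value>Windows</value>
                </dynamicCriteria>
              </dynamicSet>
            </dynamicMemberDefinition>
          </securitygroup>
        status_code: 201
```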

Andy Kennedy - Scottish VMUG April 2016

Network overlay based isolation and segmentation – Logical Network overlay-based isolation and segmentation that can span across racks or data centers regardless of the underlying network hardware, enabling centrally managed multi-datacenter security policy with up to 16 million overlay-based segments per fabric.

Policy-driven unit-level service insertion and traffic steering – Enabling integration with third-party solutions for advanced IDS/IPS and guest introspection capabilities.


National Institute of Standards and Technology (NIST) is the US federal technology agency that works with industry to develop and apply technology, measurements, and standards. NIST is working with standards bodies globally in driving forward the creation of international cybersecurity standards. NIST recently published NIST Special Publication 800-125B, “Secure Virtual Network Configuration for Virtual Machine (VM) Protection” to provide recommendations for securing virtualized workloads.

VMware NSX Switching and Routing with Tim Davis @aldtd #vBrownBag #RunNSX

The capabilities of micro-segmentation provided by NSX map directly to the recommendations made by NIST.

Section 4.4 of NIST 800-125B makes four recommendations for protecting virtual machine workloads within modern data center architecture. These recommendations are as follows:

VM-FW-R1: In virtualized environments with VMs running delay-sensitive applications, virtual firewalls should be deployed for traffic flow control instead of physical firewalls, because in the latter case, there is latency involved in routing the virtual network traffic outside the virtualized host and back into the virtual network.

VM-FW-R2: In virtualized environments with VMs running I/O intensive applications, kernel-based virtual firewalls should be deployed instead of subnet-level virtual firewalls, since kernel-based virtual firewalls perform packet processing in the kernel of the hypervisor at native hardware speeds.

VM-FW-R3: For both subnet-level and kernel-based virtual firewalls, it is preferable if the firewall is integrated with a virtualization management platform rather than being accessible only through a standalone console. The former will enable easier provisioning of uniform firewall rules to multiple firewall instances, thus reducing the chances of configuration errors.

VM-FW-R4: For both subnet-level and kernel-based virtual firewalls, it is preferable that the firewall supports rules using higher-level components or abstractions (e.g., security group) in addition to the basic 5-tuple (source/destination IP address, source/destination ports, protocol).

VMworld 2015: Introducing Application Self service with Networking and Security

NSX-based micro-segmentation meets the NIST VM-FW-R1, VM-FW-R2 and VM-FW-R3 recommendations by providing the ability to utilize network virtualization-based overlays for isolation, and distributed kernel-based firewalling for segmentation, through ubiquitous, centrally managed policy control that can be fully API-driven.

VMware NSX - Transforming Security

Micro-segmentation through NSX also meets the NIST VM-FW-R4 recommendation to utilize higher-level components or abstractions (e.g., security groups) in addition to the basic 5-tuple (source/destination IP address, source/destination ports, protocol) for firewalling. NSX-based micro-segmentation can be defined as granularly as a single application or as broadly as a data center, with controls that can be implemented by attributes such as who you are or what device is accessing your data center.


Protection against advanced persistent threats that propagate via targeted users and application vulnerabilities requires more than network-layer segmentation to maintain an adequate security posture.
These advanced threats require application-level security controls such as application-level intrusion protection or advanced malware protection to protect chosen workloads.  As a security platform, NSX-based micro-segmentation goes beyond the recommendations noted in the NIST publication and enables fine-grained application of service insertion (e.g., allowing IPS services to be applied to flows between assets that are part of a PCI zone). In a traditional network environment, traffic steering is an all-or-none proposition, requiring all traffic to be steered through additional devices.  With micro-segmentation, advanced services are granularly applied where they are most effective: as close to the application as possible, in a distributed manner, while residing in a separate trust zone outside the application's attack surface.

Kubernetes and NSX


While new workload provisioning is dominated by agile compute technologies such as virtualization and cloud, the security posture of physical workloads still has to be maintained. NSX has the security of physical workloads covered, as physical-to-virtual or virtual-to-physical communication can be enforced using distributed firewall rules at ingress or egress. In addition, for physical-to-physical communication, NSX can tie automated security of physical workloads into micro-segmentation through centralized policy control of those physical workloads through the NSX Edge Services Gateway or integration with physical firewall appliances. This allows centralized policy management of your static physical environment in addition to your micro-segmented virtualized environment.


NSX is the means to provide micro-segmentation through centralized policy controls, distributed stateful firewalling, overlay- based isolation, and service-chaining of partner services to address the security needs of the rapidly evolving information technology landscape. NSX easily meets and goes above and beyond the recommendations made by the National Institute of Standards and Technology for protecting virtualized workloads, secures physical workloads, and paves a path towards securing future workloads with a platform that meets your security needs today and is flexible enough to adapt to your needs tomorrow.

Use a Zero Trust Approach to Protect Against WannaCry

Micro-segmentation with VMware NSX compartmentalizes the data center to contain the lateral spread of ransomware attacks such as WannaCry

On May 12, 2017, reports began to appear of the WannaCry malware attacking organizations worldwide in one of the largest ransomware cyber incidents to date. The European Union Agency for Law Enforcement Cooperation (Europol) has reported more than 200,000 attacks in over 150 countries, in 27 languages, with the full scope of the attack yet to be determined.  Victims include organizations from all verticals.

WannaCry targets Microsoft Windows machines, seizing control of computer systems through a critical vulnerability in Windows SMB. It also utilizes RDP as an attack vector for propagation. It encrypts seized systems and demands a ransom be paid before decrypting the system and giving back control. The threat propagates laterally to other systems on the network via SMB or RDP and then repeats the process. An initial analysis of WannaCry by the US Computer Emergency Readiness Team (US-CERT) can be found here, with a detailed analysis from Malware Bytes here.

One foundational aspect of increasing cybersecurity hygiene in an organization to help mitigate such attacks from proliferating is enabling a least privilege (zero trust) model by embedding security directly into the data center network. The core concept of zero trust is to only allow for necessary communication between systems using a stateful firewall, assuming all network traffic is untrusted. This dramatically reduces the attack surface area.

VMware NSX micro-segmentation provides this intrinsic level of security to effectively compartmentalize the data center to contain the lateral spread of ransomware attacks such as WannaCry.

In this blog, focus is on how NSX can help:
  • Contain the spread of the malware such as WannaCry
  • Provide visibility into on-going attacks
  • Identify systems that are still infected
  • Mitigate future risk through a micro-segmentation approach

Stages of the WannaCry cyber attack

Before we provide our attack mitigation recommendations, let us review the WannaCry ransomware attack lifecycle.

WannaCry uses the EternalBlue exploit that was leaked from the NSA to exploit the MS17-010 vulnerability in Windows. WannaCry then encrypts data on the system including office files, emails, databases, and source code, as well as network shares, using RSA-2048 encryption keys with AES-128 encryption that are extremely difficult to break with current technology. WannaCry ends the “weaponization” stage by posting a message to the user demanding $300 in bitcoin as a ransom in order to decrypt the data.

Installation / Exploitation / Encryption / Command and Control:
WannaCry cycles through every open RDP session since it is also a worm that carries the malware payload, dropping itself onto systems and spreading itself. As soon as the ransomware is dropped, it tries to connect to a command-and-control URL to seize control of and encrypt the system. The code has both direct as well as proxy access to the internet. The next step for the worm is to install a service called "mssecsvc2.0" with display name "Microsoft Security Center (2.0) Service". The worm loads the crypto module when the service is installed and proceeds to encrypt the system.

WannaCry enters through email phishing or other means of breaching the network perimeter, scans all of the systems on the network, and spreads laterally from vulnerable system to system. Scans are not restricted to systems actively communicating, but also cover IP addresses obtained via multicast traffic, unicast traffic, and DNS traffic. Once WannaCry obtains a list of IPs to target, it probes port 445 with a randomly generated spoofed source IP address. If the connection on port 445 of a vulnerable system is successful, WannaCry proceeds to infect and encrypt the system. Additionally, it scans the entire /24 subnet of the system (10 IP addresses at a time), probing for additional vulnerable systems.

Preventing the attack with VMware NSX

NSX can be used to implement micro-segmentation to compartmentalize the data center, containing the lateral spread of ransomware attacks such as WannaCry and achieving a zero trust network security model.

The following are recommendations in order of priority, to create a micro-segmented environment that can interrupt the WannaCry attack lifecycle.

  • Monitor traffic on port 445 with the NSX distributed firewall. This provides visibility into SMB traffic, which may include attack traffic or attempts. Once endpoint infection is determined, whether the rule action is Allow or Block, logs from NSX can be correlated or analyzed in a SIEM, log analyzer, or network behavior analyzer.
  • Enable environmental redirection rules in NSX so that any traffic destined for critical systems is steered to an NSX-integrated IPS solution to detect network indicators of this attack. Even if the perimeter did not detect the malware, east-west traffic within the environment can be analyzed to detect the attack indicators.
  • Create an NSX Security Group for all VMs running the Windows operating system, to identify potentially vulnerable machines. This is really simple to do in NSX, as you can group VMs based on attributes like operating system, regardless of their IP address.
  • Enable Endpoint Monitoring (an NSX 6.3+ feature) on the VMs in that Windows group to detect mssecsvc2.0. If detected, verify and check what VMs it has started communicating with on port 445.
  • Create a distributed firewall rule to immediately block/monitor all traffic with a destination port of 445 on the /24 subnet of any VM found on that list.
  • Use Endpoint Monitoring to detect if mssecsvc2.0 is running on systems that are not patched, so that NSX can detect if a new attack starts.
  • Additional precautions include blocking RDP communication between systems and blocking all desktop-to-desktop communications in VDI environments. With NSX, this level of enforcement can be achieved with a single rule; a rough sketch of creating such a block rule through the NSX REST API follows this list.
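
As a rough sketch of that kind of rule, the play below appends a deny rule for destination port TCP/445 to an existing distributed firewall section through the NSX-v REST API. The section ID, credentials, and exact XML fields are placeholders and should be verified against the API guide for your NSX version:

```yaml
---
# Sketch only: section id, credentials, and XML fields are placeholders;
# verify against the NSX-v API guide for your version.
- name: Block lateral SMB (TCP/445) with the NSX distributed firewall
  hosts: localhost
  connection: local
  vars:
    dfw_base: "https://{{ nsx_manager }}/api/4.0/firewall/globalroot-0"
    section_id: 1007            # placeholder layer-3 DFW section id
  tasks:
    - name: Read the firewall section to obtain its current ETag
      uri:
        url: "{{ dfw_base }}/config/layer3sections/{{ section_id }}"
        method: GET
        user: "{{ nsx_user }}"
        password: "{{ nsx_password }}"
        force_basic_auth: true
        validate_certs: false
        return_content: true
      register: section

    - name: Append a deny rule for destination port 445 to the section
      uri:
        url: "{{ dfw_base }}/config/layer3sections/{{ section_id }}/rules"
        method: POST
        user: "{{ nsx_user }}"
        password: "{{ nsx_password }}"
        force_basic_auth: true
        validate_certs: false
        headers:
          Content-Type: application/xml
          If-Match: "{{ section.etag }}"
        body: |
          <rule disabled="false" logged="true">
            <name>Block-WannaCry-SMB</name>
            <action>deny</action>
            <services>
              <service>
                <protocol>6</protocol>
                <destinationPort>445</destinationPort>
              </service>
            </services>
          </rule>
        status_code: 201
```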

Architecting a secure datacenter using NSX Micro-segmentation

With NSX micro-segmentation, organizations can enable a least privilege, zero trust model in their environment. For environments utilizing NSX, the distributed firewall applies security controls to every vNIC of every VM. This controls communications between all VMs in the environment (even if they are on the same subnet), unlike the traditional firewall model in which flows within a subnet are typically not restricted, allowing malware to spread laterally with ease.

With a zero trust architecture enabled by NSX, any non-approved flow will be discarded by default, regardless of what services have been enabled on the VM, and ransomware like WannaCry will not be able to propagate – immediately blunting the amount of damage to data center operations and hence to the organization.

VMware NSX vSphere Zero-Trust Security Demo

19 December 2017

IBM Big Data Platform

What is big data?

Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals to name a few. This data is big data.

Enterprise Data Warehouse Optimization: 7 Keys to Success

Big data spans three dimensions: Volume, Velocity and Variety.

Volume: Enterprises are awash with ever-growing data of all types, easily amassing terabytes—even petabytes—of information.

  • Turn 12 terabytes of Tweets created each day into improved product sentiment analysis
  • Convert 350 billion annual meter readings to better predict power consumption

Velocity: Sometimes 2 minutes is too late. For time-sensitive processes such as catching fraud, big data must be used as it streams into your enterprise in order to maximize its value.

  • Scrutinize 5 million trade events created each day to identify potential fraud
  • Analyze 500 million daily call detail records in real time to predict customer churn faster

Variety: Big data is any type of data - structured and unstructured data such as text, sensor data, audio, video, click streams, log files and more. New insights are found when analyzing these data types together.

Overview - IBM Big Data Platform

  • Monitor hundreds of live video feeds from surveillance cameras to target points of interest
  • Exploit the 80% data growth in images, video and documents to improve customer satisfaction

Big data is more than simply a matter of size; it is an opportunity to find insights in new and emerging types of data and content, to make your business more agile, and to answer questions that were previously considered beyond your reach. Until now, there was no practical way to harvest this opportunity. Today, IBM’s platform for big data uses state of the art technologies including patented advanced analytics to open the door to a world of possibilities.

IBM big data platform

Data Science Experience: Build SQL queries with Apache Spark

Do you have a big data strategy? IBM does. We’d like to share our know-how with you to help your enterprise solve its big data challenges.

IBM is unique in having developed an enterprise class big data platform that allows you to address the full spectrum of big data business challenges.

The platform blends traditional technologies that are well suited for structured, repeatable tasks together with complementary new technologies that address speed and flexibility and are ideal for ad hoc data exploration, discovery and unstructured analysis.
IBM’s integrated big data platform has four core capabilities: Hadoop-based analytics, stream computing, data warehousing, and information integration and governance.

Fig. 1 - IBM big data platform

The core capabilities are:

  • Hadoop-based analytics: Processes and analyzes any data type across commodity server clusters.
  • Stream computing: Drives continuous analysis of massive volumes of streaming data with sub-millisecond response times.
  • Data warehousing: Delivers deep operational insight with advanced in-database analytics.
  • Information integration and governance: Allows you to understand, cleanse, transform, govern and deliver trusted information to your critical business initiatives.

Delight Clients with Data Science on the IBM Integrated Analytics System

Supporting Platform Services:

  • Visualization & discovery: Helps end users explore large, complex data sets.
  • Application development: Streamlines the process of developing big data applications.
  • Systems management: Monitors and manages big data systems for secure and optimized performance.
  • Accelerators: Speed time to value with analytical and industry-specific modules.

IBM DB2 analytics accelerator on IBM integrated analytics system technical overview

How Big Data and Predictive Analytics are revolutionizing AML and Financial Crime Detection

Big data in action

What types of business problems can a big data platform help you address? There are multiple uses for big data in every industry – from analyzing larger volumes of data than was previously possible to drive more precise answers, to analyzing data in motion to capture opportunities that were previously lost. A big data platform will enable your organization to tackle complex problems that previously could not be solved.

Big data = Big Return on Investment (ROI)

While there is a lot of buzz about big data in the market, it isn’t hype. Plenty of customers are seeing tangible ROI using IBM solutions to address their big data challenges:

  • Healthcare: 20% decrease in patient mortality by analyzing streaming patient data
  • Telco: 92% decrease in processing time by analyzing networking and call data
  • Utilities: 99% improved accuracy in placing power generation resources by analyzing 2.8 petabytes of untapped data

IBM’s big data platform is helping enterprises across all industries. IBM understands the business challenges and dynamics of your industry and we can help you make the most of all your information.

The Analytic Platform behind IBM’s Watson Data Platform - Big Data

When companies can analyze ALL of their available data, rather than a subset, they gain a powerful advantage over their competition. IBM has the technology and the expertise to apply big data solutions in a way that addresses your specific business problems and delivers rapid return on investment.

The data stored in the cloud environment is organized into repositories. These repositories may be hosted on different data platforms (such as a database server, Hadoop, or a NoSQL data platform) that are tuned to support the types of analytics workloads that access the data.

What’s new in predictive analytics: IBM SPSS and IBM decision optimization

The data stored in the repositories may come from legacy, new, and streaming sources; enterprise applications and enterprise data; cleansed and reference data; and output from streaming analytics.

Breaching the 100TB Mark with SQL Over Hadoop

Types of data repositories include:

  • Catalog: Results from discovery and IT data curation create a consolidated view of information that is reflected in a catalog. The introduction of big data increases the need for catalogs that describe what data is stored, its classification, ownership, and related information governance definitions. From this catalog, you can control the usage of the data.
  • Data virtualization: An agile approach to data management that allows an application to retrieve and manipulate data without requiring technical details about the data.
  • Landing, exploration, and archive: Allows large datasets to be stored, explored, and augmented using a wide variety of tools, since massive and unstructured datasets may make it infeasible to design the data set before loading any data. Data may also be kept for archival purposes, with improved availability and resiliency thanks to multiple copies distributed across commodity storage.

SparkR Best Practices for R Data Scientists
  • Deep analytics and modeling: The application of statistical models to yield information from large data sets composed of both unstructured and semi-structured elements. Deep analysis involves precisely targeted and complex queries over data sets measured in petabytes and exabytes. Requirements for real-time or near-real-time responses are becoming more common.
  • Interactive analysis and reporting: Tools to answer business and operations questions over Internet-scale data sets. Tools also use popular spreadsheet interfaces for self-service data access and visualization. APIs implemented by data repositories allow output to be efficiently consumed by applications.
  • Data warehousing: Populates relational databases that are designed for building a correlated view of business operations. A data warehouse usually contains historical and summary data derived from transaction data but can also integrate data from other sources. Warehouses typically store subject-oriented, non-volatile, time-series data used for corporate decision-making. Workloads are query intensive, accessing millions of records to facilitate scans, joins, and aggregations; query throughput and response times are generally a priority (a sketch of such a workload follows this list).
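
To illustrate the kind of scan, join, and aggregation workload described in the data warehousing item, here is a hedged PySpark sketch that rolls a hypothetical star schema up into a monthly time series. The table locations and column names (fact_sales, dim_store, sale_date, amount) are invented for illustration, not an actual IBM schema.

```python
# A hedged sketch of a warehouse-style rollup: scan, join, aggregate.
# All table paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("warehouse-rollup").getOrCreate()

# Fact and dimension tables stored as Parquet (hypothetical locations).
sales = spark.read.parquet("hdfs:///warehouse/fact_sales/")
stores = spark.read.parquet("hdfs:///warehouse/dim_store/")

# Subject-oriented, time-series rollup: monthly revenue per region.
monthly_revenue = (
    sales.join(stores, "store_id")
         .groupBy(F.trunc("sale_date", "month").alias("month"), "region")
         .agg(F.sum("amount").alias("revenue"))
         .orderBy("month", "region")
)
monthly_revenue.show()

spark.stop()
```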

IBM Power leading Cognitive Systems

IBM provides a wide variety of offerings for consideration in building data repositories:
  • InfoSphere Information Governance Catalog maintains a repository to support the catalog of the data lake. This repository can be accessed through APIs and can be used to understand and analyze the types of data stored in the other data repositories.
  • IBM InfoSphere Federation Server creates consolidated information views of your data to support key business processes and decisions.
  • IBM BigInsights for Apache Hadoop delivers key capabilities to accelerate the time to value for a data science team, which includes business analysts, data architects, and data scientists.
  • IBM PureData™ System for Analytics, powered by Netezza technology, is changing the game for data warehouse appliances by unlocking data's true potential. The new IBM PureData System for Analytics is an integral part of a logical data warehouse.
  • IBM Analytics for Apache Spark is a fully-managed Spark service that can help simplify advanced analytics and speed development.
  • IBM BLU Acceleration® is a revolutionary, simple-to-use, in-memory technology that is designed for high-performance analytics and data-intensive reporting.
  • IBM PureData System for Operational Analytics is an expert integrated data system optimized specifically for the demands of an operational analytics workload. A complete solution for operational analytics, the system provides both the simplicity of an appliance and the flexibility of a custom solution.

IBM Big Data Analytics Concepts and Use Cases

Bluemix offers a wide variety of services for data repositories:

  • BigInsights for Apache Hadoop provisions enterprise-scale, multi-node big data clusters on the IBM SoftLayer cloud. Once provisioned, these clusters can be managed and accessed from this same service.

Big Data: Introducing BigInsights, IBM's Hadoop- and Spark-based analytical platform
  • Cloudant® NoSQL Database is a NoSQL Database as a Service (DBaaS). It's built from the ground up to scale globally, run non-stop, and handle a wide variety of data types like JSON, full-text, and geospatial. Cloudant NoSQL DB is an operational data store optimized to handle concurrent reads and writes and to provide high availability and data durability (a minimal API sketch follows this list).
  • dashDB™ stores relational data, including special types such as geospatial data. You can then analyze that data with SQL or advanced built-in analytics such as predictive analytics, data mining, analytics with R, and geospatial analytics. dashDB's in-memory database technology supports both columnar and row-based tables, and its web console handles common data management tasks, such as loading data, and analytics tasks such as running queries and R scripts.
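
As a small, hedged illustration of the Cloudant service described above: because Cloudant exposes a CouchDB-compatible HTTP API, a database can be created and documents stored with plain REST calls. The account URL, credentials, and database name below are placeholders.

```python
# A minimal sketch against Cloudant's CouchDB-compatible HTTP API.
# Account URL, credentials, and database name are placeholders.
import requests

BASE = "https://ACCOUNT.cloudant.com"   # hypothetical account URL
AUTH = ("USERNAME", "PASSWORD")         # hypothetical credentials

# Create a database (201 on creation, 412 if it already exists).
requests.put(f"{BASE}/orders", auth=AUTH)

# Insert a JSON document; Cloudant assigns an id and revision.
doc = {"type": "order", "customer": "acme", "total": 42.50}
resp = requests.post(f"{BASE}/orders", json=doc, auth=AUTH)
print(resp.json())  # e.g. {'ok': True, 'id': '...', 'rev': '1-...'}
```

Because every read and write is an HTTP request against a replicated store, this model suits the concurrent, highly available operational workloads the service is optimized for.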

IBM BigInsights: Smart Analytics for Big Data

IBM product support for big data and analytics solutions in the cloud

Now that we've reviewed the component model for a big data and analytics solution in the cloud, let's look at how IBM products can be used to implement one. In previous sections, we highlighted IBM's end-to-end solution for deploying a big data and analytics solution in the cloud.
The figure below shows how IBM products map to specific components in the reference architecture.

Figure 5. IBM product mapping

ML, AI and IBM Watson - 101 for Business

IBM product support for data lakes using cloud architecture capabilities

The following figures show how IBM products can be used to implement a data lake solution. In previous sections, we highlighted IBM's end-to-end solution for deploying data lake solutions using cloud computing.

Benefits of Transferring Real-Time Data to Hadoop at Scale

Mapping on-premises and SoftLayer products to specific capabilities

Figure 7 shows how IBM products can be used to run a data lake in the cloud.

Figure 7. IBM product mapping for a data lake using cloud computing

What is Big Data University?

Big Data Scotland 2017

Big Data Scotland is an annual data analytics conference held in Scotland. Run by DIGIT in association with The Data Lab, it is free for delegates to attend. The conference is geared towards senior technologists and business leaders and aims to provide a unique forum for knowledge exchange, discussion and cross-pollination.

The programme will explore the evolution of data analytics, looking at key tools and techniques and how they can be applied to deliver practical insight and value. Presentations will span a wide array of topics, from data wrangling and visualisation to AI, chatbots and Industry 4.0.


More Information: