Does your IT shop use a combination wrench?

More and more, IT shops seem inclined to consolidate and simplify their infrastructure onto a single platform. The mindset is that all workloads can, or should, run on one platform wrapped in 'Software-defined this' and 'Software-defined that.' It tantalizes decision makers' senses as vendors claim to reduce complexity and cost.

Technology has become Ford vs. Chevy or John Deere vs. Case International. While these four vendors each have unique capabilities and offerings, they are all leaders in innovation and reliability. For IT shops, though, there is a perception that Intel and VMware are the only viable infrastructure options for every workload type: mission- and life-critical workloads in healthcare, high-frequency financial transactions, HPC, Big Data, analytics, emerging Cognitive and AI, but also the traditional ERP workloads that run entire businesses. SAP ECC, SAP HANA and Oracle EBS are probably the most common that I see, along with some industry-specific ones for industrial and automotive companies such as Infor.

When a new project comes up, little thought is given to the platform. Either the business or perhaps the ISV will state what server X should be ordered and how many. The parts arrive and eventually get deployed. Little consideration is given to the total cost of ownership or the impact to the business caused by the system complexity.

Several years ago I watched a client move their Oracle workloads to IBM POWER. This allowed them to reduce their software licensing and annual maintenance costs and to redeploy licensing to other projects, avoiding the cost of net-new licenses. As happens in business, people moved on, out and up. New people came in whose answer to everything was Intel plus VMware. Yes, a combination wrench.

If you have ever used a combination wrench, you know there are times when it is the proper tool. However, it can also strip or round over the head of a bolt or nut if too much pressure or torque is applied. Sometimes the proper tool is an SAE or metric box wrench, possibly a socket, or even an impact wrench. In this client's case, they have started to move their Oracle workloads from POWER to Intel: workloads currently running on standalone servers or, at most, 2-node PowerHA clusters. They are moving these simple, low-complexity Oracle VMs onto 6-node VMware Oracle RAC clusters that have since grown to 8 nodes. Because we all know Oracle RAC scales really well (please tell me you picked up on the sarcasm).

Earlier this year I heard from the business that they had to buy over $5M of net-new Oracle licensing for this new environment. To offset this unforeseen Oracle expense, they are moving other commercial products to open source, since we all know that open source is "free."

Oh, I forgot to mention: that 8-node VMware Oracle RAC cluster is crashing virtually every day. I guess they are putting too much pressure on the combination wrench!

HPE; there you go again! Part 2

Update on Sept 05, 2016: I split-up the original blog (Part 1) into two to allow for easier reading.

The topic that started the discussion is a blog by Jeff Kyle, HPE Director of Mission Critical Systems, promoting his Superdome X server at the expense of IBM POWER, basing his arguments on straw men and simply factually incorrect information.

Now it’s my turn to respond to all of  Jeff’s FUD.

  • Let's start with my favorite topic right now: finally having an acknowledgement that Intel customers using VMware to run Oracle Enterprise Edition software products licensed by core have a problem.
    • VCE President Chad Sakac pens his open letter to Larry at Oracle, asking him to take his jackboot off the necks of his VMware people.
    • Read my blog response
  • VMware's Oracle problem is this: Oracle's position is essentially that if customers are running any Oracle Enterprise product licensed by core on a server running VMware and managed by vCenter, then ALL (yes, ALL) servers under that vCenter management environment must be licensed for ALL of the Oracle products running on that one server. Preposterous or not, it is not my fight. Obviously, VMware and the Intel server vendors whose sales are impacted by this are not happy. Oracle, which offers x86 alternatives in the form of Exadata and the Oracle Database Appliance, provides its own virtualization capabilities that are NOT VMware based and for which clients do NOT have to license all of the cores, only those being used.
  • VCE and House of Bricks, via a VCE-sponsored whitepaper, are encouraging customers to stand up to Oracle during contract negotiations and audits, and in general to take the position that your business will pay only for the cores, and thus the licenses, on which Oracle Enterprise products actually run. Of course, neither VCE, nor HoB, nor any other Intel vendor I have read about is providing any indemnification to customers who stand up to Oracle and are found out of compliance, facing fines, penalties and fees. Those customers have the choice to pay up or fight in court. Yes, it's the right thing to do, but keep in mind that Oracle is a company predisposed to litigate.
  • Yes, I agree that software licensing and maintenance costs are among the largest cost items to a business, far higher than infrastructure, though Intel vendors would have you believe otherwise.
  • IBM Power servers have several “forward looking Cloud deployment” technologies
    • Open source products like PowerVC, built on OpenStack, manage an enterprise-wide Power environment and integrate into a customer's OpenStack environment.
    • IBM Cloud PowerVC Manager, also built on OpenStack, provides clients with enterprise-wide private cloud capabilities.
    • Both PowerVC and Cloud PowerVC Manager integrate with VMware's vRealize, allowing it to manage a Power environment.
    • If that isn't enough, using IBM Cloud Orchestrator clients can manage a heterogeneous compute and storage platform, whether on-prem, hybrid or exclusively in the cloud.
    • IBM will announce additional capabilities on September 8, 2016 that deliver more features to Power environments.
  • “Proprietary chips” – so boring. What does that mean?
    • Let's look at Intel as they continue to close their own ecosystem. They bought an FPGA company with plans to integrate it into their silicon. Instead of working with GPU partners like NVIDIA and AMD, they developed their own many-core accelerator, Knights Landing (Xeon Phi). Long ago they integrated Ethernet controllers into their chips and, depending on the chip model, graphics capability. They build SSDs, attempted to build mobile processors, and my last example of them closing their ecosystem is Intel's effort to build their own high-speed, low-latency communication transport called Omni-Path instead of working with high-speed Ethernet and InfiniBand partners like Mellanox. Of course, unlike Mellanox, which provides offload processing capabilities on its adapters, true to form Intel's Omni-Path does not, requiring the main processor cores to service the high-speed fabric traffic. That should make for some unpredictable utilization and increased core consumption, which simply drives more servers and more licenses.
    • Now let's look at what Power is doing. IBM has opened up the Power Architecture specification to the OpenPOWER Foundation. Power.org is still the governing body, but OpenPOWER is designed to encourage development and collaboration under a liberal license to build a broad ecosystem. Unlike Intel, which is excluding more and more partners, OpenPOWER has partner companies building Power chips and systems, not to mention peripherals and software.
    • I’ll spare you from looking up the definition of Open & Proprietary as I think it is clear who is really open and who is proprietary.
  • Here is how the typical Intel “used car” salesman sells Oracle on x86: “Hey Steve, did you know that Oracle has a licensing factor of .5 on Intel while Power is 1.0? Yep, Power is twice as much. You know that means you will pay twice as much for your Oracle licenses! Hey, let’s go get a beer!”
    • What Jeff is forgetting to tell you, or simply does not know, is that the Pella example is unique: Oracle runs on a Superdome X server there because of its nPAR capability. Most customers do not run Oracle on larger Intel servers like that, which may offer a hardware partitioning feature allowing for reduced licensing; they typically run it on 2- or 4-socket servers.
    • The Superdome X server supports two types of partitioning that are carry-overs from the original Superdome (Itanium) servers. vPARs and nPARs are both considered hard partitioning and thus both allow the system to be configured into smaller groups of resources. This allows only those cores to be licensed while still adhering to Oracle's licensing rules.
    • HPE provides the Pella case study, which states they have a 40-core partition separated from the other cores on the server using nPAR technology; it appears as a single server although it is made up of 2 blades. nPARs separate resources along "cell board" boundaries, which are the equivalent of an entire 2-socket blade. Pella's primary Oracle environment runs on 2 blades, each with 2 x 10 cores, totaling 40 cores. These two production blades with 40 cores and 20 Oracle licenses sit alongside 2 other blades in one data center, while another HPE SD-X chassis sits at the failover site. I wonder if Pella realizes the inefficiency of the Superdome X solution. Every Intel server carries a compromise: traditional scale-out 1- and 2-socket servers compromise on scalability, performance and RAS, and traditional scale-up 4-socket and larger Intel servers compromise on scalability, performance and RAS as well. Each Superdome X blade has an XBar controller plus the SX3000 fabric bus. For this 4-socket NUMA partition to act like one server, it requires 8 hops for every off-blade remote memory access. Further, if the second blade isn't in the matching slot scheme, such as blade 1 in slot 1 and blade 2 in slot 3, performance is degraded further. Do you see what I mean by Intel servers having land mines with every step?
    • The Pella case study says the failover database server uses a single blade consisting of 30 cores. I am not sure how they are doing that if they are using E7 v3 or E7 v4 processors, as there is no 15-core option; there is an E7 v2 (Ivy Bridge) 15-core option, but I doubt they would use it. This single Oracle DB failover blade sits alongside 2 additional blades. The fewest Oracle licenses you could pay for on the combined 4-socket, 40-core pair of blades, assuming 2 x 10-core chips per blade, is 20 Oracle licenses. So even if the workload ONLY requires 8, 12, 16 or 18 cores, the customer would still pay for 20 licenses.
    • This so-called $200,000 in Oracle licensing savings really is nothing. I just showed a customer running Oracle EBS on Linux on Dell how they would save $2.5M per year in Oracle maintenance costs by moving the workload to AIX on POWER8. Had they deployed the solution on AIX to begin with, factoring in the 5-year TCO difference between what they are paying with Intel and POWER, this customer would have avoided spending $21M. Let that sink in.
    • I do not intend to be disrespectful to Pella, but had you put the Oracle workloads running on the older HP Superdome onto POWER8 in 2015, you would not have bought a single Oracle license. I could all but guarantee that you would have given licenses back, if desired. Not only would you avoid buying licenses; after returning licenses, you would save the 22% annual maintenance for each returned license.
    • Look at one of my old blogs where I give some Oracle licensing examples comparing licensing costs for Intel vs. Power. It is typical of what I regularly see with clients, if anything with even greater consolidation ratios and subsequent license reductions.
    • The Pella case study does not mention whether the new Superdome X solution uses any virtualization technology; I can only assume it does not, since it was not mentioned. With IBM Power servers running AIX, all POWER servers come with virtualization (note I said "running AIX"). With Power, the customer could add and remove cores and memory, add and remove I/O for any LPAR (LPAR = VM) while doing concurrent maintenance on the I/O path out-of-band via dual VIOS, and move a VM live from one server to another. Maybe that is only used when upgrading to the next generation of server, you know, POWER9: the next generation that would deliver to Pella even more performance per core, allowing them to return more Oracle licenses and save even more money.
  • This comes back to the "Granddaddy" statement Jeff made. Power servers have a license factor of 1.0, but with POWER server technology customers ONLY license the cores used by Oracle. You can create dedicated VMs where you only license those cores, regardless of how many are in the server. Another option is to create a shared processor pool (SPP); without going into all of the permutations, let's simply say you ONLY license the cores used in the pool, not to exceed the number of cores in the SPP. What is different from the dedicated VM is that within the SPP there can be 1 to N VMs sharing those cores and thus sharing the Oracle licenses (see the licensing sketch after this list).
  • I did some analysis, which I also use in my SAP HANA on POWER discussions, showing that processor performance per core has increased with every generation from POWER5 all the way to POWER8. With POWER9 details just discussed at Hot Chips 2016 earlier this month (August), we can expect it to deliver a healthy increase over POWER8 as well. Thus, P5 to P5+ saw an increase in per-core performance, and P5+ to P6 to P6+ to P7 to P7+ to P8 all saw successive increases in per-core performance. Contrast that with Intel, and reference a recent blog I wrote titled "Intel, the Great Charade". The first-generation Nehalem, called Gainestown, delivered a per-core performance rating (as provided by Intel) of .29. The next release was Westmere with a rating of .33. After that came Sandy Bridge at .32, followed by Ivy Bridge at .31, then Haswell at .29, and the latest carries a per-core performance rating of .29. What does this mean? In 2016, per-core performance is the same as it was for a Nehalem-era processor from nearly a decade ago. Yes, they have more cores per socket, but I'll ask you: how are you paying for your Oracle, by core or by socket?
  • Next, IBM Power servers running AIX, which is what primarily runs Oracle Enterprise Edition Database, use PowerVM, the software suite that exploits the Power Hypervisor, a highly efficient and reliable piece of firmware. Part of this efficiency is how it shares, and thus dispatches, unused processor cycles between VMs, not to mention the availability of 8 hardware threads per core, clock speeds up to 4.5 GHz, at least 2X greater L1 and L2 caches, 3.5X greater L3 cache and 100% more L4 cache than Intel. What does this mean? It means that Power does more than just beat Intel by 2X in what I call a "foot race". When you factor in the virtualization efficiency, you start to get processing efficiencies approaching 5:1, 10:1, or even higher.
  • I like to tell the story of a customer running Oracle EBS across two sites. They had additional Oracle products, RAC and WebLogic, but this example focuses just on Location 1 and on Oracle Enterprise Edition Database. The customer was evaluating a Cisco UCS that was part of a vBlock, an Oracle Exadata and an IBM POWER7+ server. I'll exclude Exadata, again because of some licensing complexities that skew the results against other Intel servers; just know the POWER7+ solution kicked its ass. For the vBlock, a VCE engineer sized the server and core requirements for the two-site solution. Looking just at Location 1 for Oracle Enterprise Edition DB, the VCE engineer determined 300 Intel cores were required. These workloads needed varying numbers of cores: 7 cores in one server rounded up to 10, another server needed 4 cores rounded up to 6 or maybe 8, and so on for dozens of workloads. To reiterate, VCE did that sizing; I did the POWER7+ sizing independently of VCE, completing mine first for that matter. My sizing came to only 30 POWER7+ cores. That is 300 Intel cores, or 150 Oracle licenses, compared to 30 POWER cores, or 30 Oracle licenses. If my memory serves me correctly, the hard core requirement for the Intel workload on the vBlock was around 168 cores, which still would have been 84 Oracle licenses. This customer was receiving a 75% discount from Oracle, and even with that the difference in licensing cost (Oracle EE DB, RAC and WebLogic for 2 sites) was somewhere around $10-12M. Factor in the 22% annual maintenance, and the 5-year TCO for the Intel solution, ONLY for the Oracle software, was around $20M vs. $5-6M on POWER. By the way, the hardware solution costs were relatively close, within a few hundred thousand dollars. (See the worked example after this list.)
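To make the licensing argument above concrete, here is a minimal sketch of the arithmetic, assuming the core factors discussed above (0.5 for x86, 1.0 for POWER) and Oracle's stated position that a vCenter-managed cluster is licensed in its entirety. The cluster size, pool size and list price in the code are hypothetical placeholders, not quotes; your Oracle contract and the current core factor table govern the real numbers.

```python
# Hedged sketch of the Oracle licensing arithmetic discussed above.
# The core factors (0.5 for x86, 1.0 for POWER) are the ones cited in the
# text; the cluster size, pool size and list price below are hypothetical
# placeholders, not quotes, and your Oracle contract governs the real rules.
import math

LIST_PRICE_PER_LICENSE = 47_500   # hypothetical list price per processor license

def licenses(cores_to_license: int, core_factor: float) -> int:
    """Processor licenses = licensed cores x core factor, rounded up."""
    return math.ceil(cores_to_license * core_factor)

# Scenario 1: Oracle's stated position on a vCenter-managed VMware farm --
# every core under that vCenter must be licensed, not just the Oracle host.
vmware_hosts, cores_per_host = 8, 20              # hypothetical 8-node cluster
x86 = licenses(vmware_hosts * cores_per_host, 0.5)

# Scenario 2: POWER with a dedicated VM or a shared processor pool (SPP) --
# only the cores in the VM or pool are licensed, regardless of server size.
spp_cores = 8                                     # hypothetical pool size
power = licenses(spp_cores, 1.0)

for label, n in [("x86/VMware (entire vCenter cluster)", x86),
                 ("POWER (dedicated VM or SPP)", power)]:
    print(f"{label}: {n} licenses, ~${n * LIST_PRICE_PER_LICENSE:,} at list")
```

The shape of the math is the point: the 0.5 core factor only halves the multiplier, while the number of cores you are forced to license is what actually moves the total.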
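And here is the same arithmetic applied to the Location 1 sizing story above. The core counts (300 vs. 30), the 75% discount and the 22% maintenance rate are the figures from the story; the list price is a hypothetical placeholder and only Oracle EE DB is modeled, so the output illustrates the mechanics rather than reproducing the actual $20M vs. $5-6M totals, which also included RAC and WebLogic across two sites.

```python
# Hedged sketch: the license-count and 5-year-cost mechanics behind the
# Location 1 story above. The core counts (300 Intel vs. 30 POWER7+), the
# 75% discount and the 22% maintenance rate come from the text; the list
# price is a hypothetical placeholder, and only Oracle EE DB is modeled.

EE_DB_LIST = 47_500        # hypothetical list price per processor license
DISCOUNT = 0.75            # discount cited in the story
MAINTENANCE = 0.22         # annual maintenance as a fraction of net license cost
YEARS = 5

def five_year_cost(cores: int, core_factor: float) -> tuple[int, float]:
    lic = int(cores * core_factor)               # processor licenses required
    net = lic * EE_DB_LIST * (1 - DISCOUNT)      # net license cost after discount
    return lic, net + net * MAINTENANCE * YEARS  # licenses plus 5 years of maintenance

for name, cores, factor in [("Intel/vBlock sizing", 300, 0.5),
                            ("POWER7+ sizing", 30, 1.0)]:
    lic, total = five_year_cost(cores, factor)
    print(f"{name}: {lic} licenses, ~${total:,.0f} over {YEARS} years")
```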

I know we are discussing Oracle on Intel, but I wanted to share this SAP 2-tier SD performance comparison between 4-, 8- and 16-socket Intel servers and 2- and 8-socket POWER8 servers. I use this benchmark because I find it one of the more reliable system-wide tests. Many benchmarks focus on specific areas such as floating point or integer, not on transactional work involving compute, memory movement and active I/O.

[Chart: SAP 2-tier SD benchmark comparison, 4/8/16-socket Intel vs. 2/8-socket POWER8]

Note in the results that the 4-socket Haswell server outperforms the newer 4-socket Broadwell server. Next, notice that the 8-socket Haswell server outperforms the newer 8-socket Broadwell server. Lastly, notice the two 16-socket results, both of which are on an HP Superdome X server. Using the SAP benchmark measurement of SAPS, they show the lowest SAPS per core of any of the Intel servers shown. Do you notice another pattern? The 4-socket servers show greater efficiency than the 8-socket servers, which show greater efficiency than the 16-socket servers.

Contrast that with the 2-socket POWER8 server, which by the way delivers 2X the best Intel result. If the trend we just reviewed with the Intel servers held, we would expect the 8-socket POWER8 result to show fewer SAPS per core than the 2-socket POWER8 server. We already know the answer: it was highlighted in green in the chart as the highest result, roughly 13% greater than the 2-socket POWER8. The 8-socket POWER8 was also more than 2X greater than any of the Intel servers, and 2.8X greater than the 16-socket HP Superdome X specifically.
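For readers without the chart handy, the comparison above is simply total certified SAPS divided by core count. A minimal sketch of that normalization follows, using obviously hypothetical placeholder figures rather than the certified results (those live in the chart above and on SAP's benchmark site).

```python
# Hedged sketch of the SAPS-per-core normalization used above. The figures
# below are hypothetical placeholders, NOT certified results; the only point
# is that the comparison divides total certified SAPS by the number of cores.
hypothetical_results = {
    "Intel 4-socket (hypothetical)":  (72,  100_000),   # (cores, SAPS)
    "Intel 8-socket (hypothetical)":  (144, 185_000),
    "Intel 16-socket (hypothetical)": (288, 330_000),
    "POWER8 8-socket (hypothetical)": (96,  270_000),
}

for name, (cores, saps) in hypothetical_results.items():
    print(f"{name}: {saps / cores:,.0f} SAPS per core")
```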

Here comes my close – let’s see if I do a better job than Jeff!

  • My last point is in response to Jeff's statement that "There's a compelling alternative. A 'scale-up' (high capacity) x86 architecture with a large memory capacity for in-memory compute models can dramatically improve database performance and lower TCO."
    • I've already debunked the myth, and the simply false statements, that running Oracle on POWER costs more than on Intel. In fact, it is just the opposite, and by a significant amount.
    • Also, in the HPE whitepaper "How memory RAS technologies can enhance the uptime of HPE ProLiant servers" they state, "It might surprise you to know that memory device failures are far and away the most frequent type of failure for scale-up servers." It is amazing how HPE talks out of both sides of their mouth. Memory fails more often than any other component in HPE servers, yet they suggest you buy these large scale-up servers that hold more memory to host and run more workloads, such as in-memory workloads from SAP HANA, Oracle 12c In-Memory or DB2 with BLU Acceleration, while in their own publications they acknowledge it is the part most likely to fail in their solution.
    • UPDATE: There is a better alternative to HPE Superdome X, scale-up, scale-out or any other Intel-based server. That alternative has higher processor performance, greater memory bandwidth, a (much) more reliable memory subsystem and stronger overall system RAS, along with a full suite of virtualization capabilities. That alternative is an IBM server, specifically POWER8, available in OpenPOWER-based 1- and 2-socket configurations (look at the LC models), scale-out 1- and 2-socket models (look at the L models) and scale-up 4- to 16-socket Enterprise models. I'll discuss more about HPE's and IBM's memory features in my next blog.

Your Honor, members of the jury, these are the facts as presented to you.  I leave it to you  to come back with the correct decision – Jeff Kyle and HPE are guilty of misleading customers and propagating untruths about IBM POWER.

Case closed!

 

Power investments continue to pay off!

This article highlights the announcements IBM is making in the fourth quarter of 2015 related to the Power platform. It is one of the largest announcements I can recall in years, touching Linux, IBM i, AIX, virtualization, management, ISVs and the platform itself. Since I am not an IBMer with access to schedules, a few details may differ.

First, you will want to register. This is a virtual event, which is convenient as you can not only register at any time but also watch it at any time online. Register at https://engage.vevent.com/index.jsp?eid=556&seid=80414&code=Social_Tiles.

As a Business Partner, I am glad to see that IBM is delivering on what they told us over the past several years. They are taking their billion-dollar investment and delivering useful, leading technologies with Linux on Power where needed, but also with AIX and IBM i. These latter two Power pillars are far more mature and do not require the technology enhancements or the ISV adoption that Linux on Power (LoP) requires. It stands to reason there will be more activity around the LoP space, not because that is the future and the others will diminish, but because, as I mentioned, it is less mature relative to the enterprise AIX and IBM i markets.

This is the extensive list of features being announced this quarter.  I will add a reference section after the announcement(s) to allow you to get more information on each of the features.

AIX

  • AIX 7.2 – some really good features!
    • “Live Update” or apply AIX updates concurrently without requiring a reboot
    • RDSv3 over RoCE optimizes Oracle RAC performance using Oracle RDSv3 protocol with Mellanox Connect RoCE adapters (up to 40 Gb)
    • Workload optimization with Flash
    • Dynamic System Optimizer
    • BigFix Lifecycle for automated and simplified patching
  • New AIX Enterprise Edition packaging
  • AIX 6.1 Withdrawal from Marketing April 2016

IBM i

  • New IBM i v7.1 TR11
  • New IBM i v7.2 TR3
  • S822 expanded capabilities – supports IBM i
    • Requires VIOS for I/O

Virtualization – PowerVM

  • New VIOS release – v2.2.4 based on AIX 7
  • NovaLink architecture provides scalability features for OpenStack deployments
  • New SRIOV capabilities
  • Introducing vNIC Adapter – increases performance with SRIOV
  • Shared Storage Pool enhancements

HMC

  • New HMC model – CR9
  • New HMC version – 8.8.4
  • New virtual HMC offering – Run 8.8.4 in a VMware or RHEV VM (x86)

Power platform

  • New Power8 firmware release – 840 or 8.4 level
  • New PCIe adapters
  • PurePower enhancements
    •  IBM i support
    • vHMC support
    • PurePower Integrated Manager improvements
    • Order  both S822 & S822L with initial order

Management

  • New PowerVC 1.3 version – more management & OpenStack integration features
    • Advanced policy-based management
    • Supports MSPP
    • Expanded vSCSI & NPIV support for certain storage models
  • Manage Power servers through PowerVC and OpenStack from VMware's vRealize

Security

  • PowerSC NERC profile complements existing PCI, DoD STIG, HIPAA and SOX-COBIT profiles

High Availability

  • New PowerHA 7.2 version
    • Integrates Power Enterprise Pools as part of a PowerHA failover operation
    • Improved integration with LPM
  • Non-disruptive upgrade
  • Integrates with new AIX Live Update feature
  • New wizard to use GLVM for low cost mirroring option
  • Enhanced EMC SRDF support
  • Supported on AIX 6.1 TL9 and later
  • Supports Power6 and newer servers

Linux

  • New Power Linux server models  – true price parity with Intel servers. Built on OpenPOWER
    • S822LC – up to 20 cores, 1 TB, 2 SFF HDD & 5 PCIe slot 2U server
    • S812LC – up to 10 cores, 1 TB, 14 LFF HDD & 4 PCIe slot 2U server
  • PowerKVM features
    • Dynamically add/remove cpu & memory resources from VM’s
    • Live Migration
  • IFL enhancements – IFLs run IBM software in a Linux VM on 4-socket and larger Power servers at 70 PVUs per core vs. 100 or 120

Performance

  • New CAPI offerings
  • New SSD offerings – Gen 4 drives, higher performance & capacities
  • 36 port EDR 100 Gb/s Infiniband Switch delivering latency as low as 130 ns

ISV & Software

  • New Linux ISV partnerships – More & more ISV’s are coming to IBM asking to be a part of the Power market revolution taking place
  • SAP HANA announcements
  • New BigInsights 4.1 features with Hadoop & Spark
  • PureApp now available with Power8 servers (announced July 2015)

Cloud

  • SoftLayer announcements – Linux on Power bare metal offerings
  • Power Enterprise Pool enhancements

The above list is fairly complete, although it lacks a lot of detail that is available online in the announcement letters; better yet, contact your IBM Power Sales Specialist or Business Partner. If your Business Partner is not proactively offering to keep you updated on these types of announcements, you may want to reevaluate what value your Value Added Reseller is providing and look for another. Don't settle for an order taker; look for a technology enabler.

IBM continues to deliver innovation, value, solutions and options versus the "good enough" alternative from Intel, where it has become obvious over their last two chip releases that they are taking customers for granted. Consider how the performance of the Power8 processor makes it the equivalent of a 35 PVU chip vs. Intel's 70 PVU (this is a PowerMan example, not an official IBM rating). With IBM software, that is an immediate 50% reduction in licensing and maintenance costs. Factor in the hypervisor efficiency and the savings should increase significantly.
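A quick sketch of that PVU arithmetic, assuming the roughly 2X per-core performance claim above and a 70 PVU per-core rating applied to both platforms (a PowerMan-style illustration, not IBM's official PVU table), with a hypothetical workload size and a hypothetical dollars-per-PVU figure:

```python
# Hedged sketch of the PVU arithmetic above. The ~2X per-core performance
# claim and the 70 PVU per-core rating are taken from the text; the workload
# size and the dollars-per-PVU figure are hypothetical placeholders.
PVU_PER_CORE = 70          # rating applied to both platforms in this example
PRICE_PER_PVU = 50         # hypothetical $ per PVU for some IBM software product

intel_cores = 32                       # hypothetical cores needed on Intel
power_cores = intel_cores // 2         # ~2X per-core performance => half the cores

for name, cores in [("Intel", intel_cores), ("POWER8", power_cores)]:
    pvus = cores * PVU_PER_CORE
    print(f"{name}: {cores} cores x {PVU_PER_CORE} PVU = {pvus} PVUs, "
          f"~${pvus * PRICE_PER_PVU:,} in licenses")

# POWER8 lands at half the PVUs -- the "effective 35 PVU" comparison in the
# text -- before counting any additional hypervisor-efficiency gains.
```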

Who doesn’t want more performance, more reliability for the same price as the competition? You can have it  your way with IBM Power8 & OpenPOWER.

 

Are you keeping score?

Unlike Oracle and Intel, who in their own ways are both becoming very rigid and proprietary, IBM is going in the opposite direction by actually opening up its own technology and embracing the open source community.

Are you keeping up with all of the changes taking place in IBM's Power portfolio over the last two years? How could you, when there have been so many changes to just about every area of the platform? Just in case, let's review some of the features and changes. Now, since I don't work at IBM and don't have a magic 8-ball or a direct line to Doug Balog, IBM's Power General Manager, there is a chance I will get something wrong, and I'm sure I'll leave a few things out. What I will not do, though, is talk about any upcoming products – which, between you and me, will be freakin' awesome!

  • Power8 processors support both Big Endian and Little Endian
  • Little Endian offerings are with Linux: RedHat 7.1, SUSE 12 & Ubuntu 14 & 15
  • Big Endian offerings are AIX, IBM i as well as Linux (SUSE 11 & RedHat 6.5 & 7.1)
  • PowerVM supports both BE and LE Operating Systems (concurrently)
  • IBM introduced an alternative hypervisor to PowerVM; an open source alternative based on KVM
  • PowerKVM on Scale-Out models support both BE and LE Linux
  • Still supports the use of VIOS for PowerVM environments with HMC enhancements
  • Uses Kimchi to manage PowerKVM environments
  • SR-IOV adapters (finally) available
  • Nice set of quad port Ethernet (2×10+2x1Gb) adapters available for AIX & Linux workloads
  • 40 Gb RoCE adapters
  • PowerVC based on OpenStack replaces Systems Director (portions of it)
  • PowerVC integrates with VMware’s vRealize allowing it to manage & provision Power Systems
  • Gen 2 of a Converged solution called PurePower using S822(L) servers + V7000 + Mellanox switches
  • 2X performance increase over Power7. Greatest increase that I recall from 1 gen to the next
  • 3X greater memory bandwidth for Scale-Out over Power7/7+ Entry servers
  • Just under 2X greater memory bandwidth for Power8 Enterprise vs Power7/7+ Enterprise servers
  • 3X greater I/O bandwidth for all Power8 servers over all Power7/7+ servers
  • 2X more L2 cache
  • Addition of L4 cache
  • SMT8 that is dynamic per VM unlike Intel with their static 2 way Hyperthreading
  • Up to 1 TB Ram per socket
  • Centaur buffer supports DDR3 & DDR4 memory
  • Scale-Out servers (S & L models) use Enterprise Memory just like the Enterprise servers
  • Significantly more Fault Isolation Registers & Checkers
  • Significant reliability enhancements to processor, cache, memory & I/O subsystems
  • Gen3 I/O Drawers with more PCIe slots per drawer
  • Gen3 PCIe slots (x8 & x16) in all servers
  • Available split backplane in ALL Scale-out servers – YEAH!
  • Significant increase in the number of internal Scale-Out server disk slots: 8 – 18 slots
  • 6 or 8 x 1.8″ disk slots (model dependent) using IBM’s Award Winning (Is it?) Easy Tier
  • Hot swap PCIe slots in ALL models
  • Scale-Out internal disk RAID cache increased from 175 MB in Power7 to 7 GB
  • PowerVM, PowerKVM and RHEV hypervisor options
  • Bare-Metal server option (certain models)
  • Enhanced Voltage Regulator Modules (VRM)
  • Enhanced EnergyScale
  • Some models lowered their software tier from Large to Medium and Medium to Small
  • Some servers received improved warranties
  • Power Enterprise Pools with Mobile Cores & Mobile Memory
  • Integrated Facility for Linux (IFL’s) on E850, E870 & E880
  • The PowerVM used with L model servers is called PowerVM Linux Edition, equivalent to Enterprise Edition
  • Supported with SAP HANA
  • All non-L models supported with EPIC
  • All Enterprise servers supported with EPIC
  • Delivering roughly 2X performance increase over Intel Haswell EP/EX processors
  • Power8 has 3X greater memory bandwidth than Intel Haswell EP memory in Lock-Step mode
  • Power8 has roughly 2.5X greater memory bandwidth than Intel Haswell EX memory in Lock-Step mode
  • Power8 has 2X greater L1 cache
  • Power8 has 2X greater L2 cache
  • Power8 has 2.5X greater L3 cache
  • Intel has no L4 cache
  • Intel has significantly fewer Checkers
  • Intel does not use anything similar to Fault Isolation Registers
  • Intel 1 & 2 socket servers do not support Machine Check Architecture (MCA)
  • Only Intel 4 socket & greater servers support Machine Check Architecture (MCA)
  • Intel rates their memory capabilities in performance mode which has limited RAS capability
  • For Intel to increase memory resiliency, must use lock-step mode which decreases performance
  • For Intel to increase memory resiliency, can use memory mirroring reducing memory capacity by 1/2
  • All Power8 servers have at least 1 Coherent Accelerator Processor Interface (CAPI)
  • Several CAPI solutions are available today with more coming (very soon – that’s the only hint)
  • Using IBM’s Advanced ToolChain delivers an enhanced SDK & GCC delivering increased results
  • IBM optimized software stack for Power8 delivering greater results per core
  • Optimized ISV stack for Power8 delivering greater results per core
  • ISV’s like MariaDB, WebFocus Express, Redis Labs, ColdFusion, EnterpriseDB, Magento
  • ISV’s like McObject, Veristorm, HelpSystems, SugarCRM, Zato, OpenPro and many more
  • Guaranteed utilization levels from 65 – 80% depending on the model
  • Backbone of Watson
  • Supports Apache Hadoop & Apache Spark
  • Supports BigInsights
  • Supports internal disks for Big Data
  • Supports a superior Big Data storage option using Elastic Storage Server: S822L & GPFS Native RAID
  • Run DB2 10.5 with BLU Acceleration 2X faster than Intel
  • WebSphere & Cognos faster
  • SPSS faster and many more
  • 150+ members in OpenPower Foundation
  • Blue badged Power based solutions
  • Non-blue-badged Power based solutions available
  • PowerVP is awesome
  • Virtual HMC is coming (was a IBM SOD)

I tried to create an exhaustive list to make the point that IBM is investing in, engineering and building game-changing technology to help customers solve real problems. Unlike Oracle and Intel, who in their own ways are both becoming very rigid and proprietary, IBM is going in the opposite direction by actually opening up its own technology and embracing the open source community. The list of ISVs, the use of OpenStack, the use of an open source hypervisor and the most recent announcement whereby VMware's vRealize will be able to provision and manage Power Systems are a testimony to the change taking place within IBM and with customers. Expect more of the above and more beyond this. Expect it to blow your mind.

I may come back over time and enhance this list if I get froggy.  Of course, if you are a customer you could always invite me to speak with you and I would be happy to discuss any and all of this.

What would you like to see in addition to the extensive list above?

IBM Power Systems & VMware; Why Not!

IBM and VMware made an announcement that many would never have anticipated.

On August 31, 2015 at VMware’s VMworld conference in San Francisco both IBM and VMware made an announcement that many would never have anticipated.  In support of private and hybrid clouds, VMware’s vRealize Automation is now able to provision and manage virtual machines (LPAR’s in traditional parlance) on IBM’s Power Systems.

Details are sparse as I write this, limited to what I have found from IBM and other online sources. Here is an IBM blog with most of the details. I won't repeat what it says, as it is not the focus of this article.

The focus of this blog is this: why wouldn't VMware want to do this, and why shouldn't the traditional VMware shop embrace this first step? First, it is further evidence of both companies weaving their unique products together with open source OpenStack to deliver greater features and expanded capabilities to customers. I am not very versed in vRealize, so I'll speak about it in generalities. However, I am very well versed in IBM's PowerVC, which is based on the OpenStack framework. This site-management tool for Power Systems provides image management (fancy talk for OS backup and provisioning), placement policies (aka affinity), full-stack configuration of VMs (the ability to configure networking and storage), VM replication, Remote Restart (an excellent VM recovery feature) and much, much more.

Because it is based on OpenStack, not only can it be enhanced to suit a customer's environment via industry-standard APIs, it can also be integrated into a customer's management solution. Enter VMware's vRealize Automation product. This is where the vRealize people say "yep, I know how that works" and where the people unfamiliar with vRealize google it.
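Because PowerVC presents OpenStack-compatible APIs, standard OpenStack tooling can drive it. Below is a minimal sketch using the community openstacksdk library; the endpoint, project and credentials are hypothetical placeholders, and the exact services and authentication details exposed by a given PowerVC release may differ, so treat it as an illustration rather than a tested recipe.

```python
# Hedged sketch: talking to an OpenStack-based manager (as PowerVC is
# described above) with the community openstacksdk library. All endpoint
# and credential values are hypothetical placeholders; the services and
# auth details exposed by a given PowerVC release may differ.
import openstack

conn = openstack.connect(
    auth_url="https://powervc.example.com:5000/v3",   # hypothetical endpoint
    project_name="ibm-default",                       # hypothetical project
    username="admin",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# List the virtual machines (LPARs) the manager knows about.
for server in conn.compute.servers():
    print(server.name, server.status)
```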

This creates an expanded platform-management market for VMware. We can speculate whether this is all that IBM and VMware have up their sleeves or whether this is just the first of many future announcements; I don't know, but I'm erring on the side of the latter. It also gives VMware a bump in promoting their cloud solution to customers by claiming it can support not just x86 technology but also Big Iron like Power and System z (yes, they announced support for it today as well, but you will have to read a Z blog for more details on that).

For customers, this delivers a significant endorsement of the viability of IBM Power Systems. The x86 vendors have for years labeled RISC systems as legacy and dying. Although true for the HP and SPARC Unix platforms (if I said RISC, then because of HP's Itanium I would surely get criticized, so I jumped to UNIX to avoid it), it has not been true for IBM. That change happened during the Power6 to Power7 transition. Power6 was the last of the Big Iron systems with the Big Iron mentality. Power7 began the shift toward the new battlefront consisting of x86-based technologies. With Power8, the shift by IBM to be not only competitive but a leader in many categories was firmly in place: platform openness with hypervisor choice (PowerVM, PowerKVM, RHEV and bare metal) and OS flexibility using traditional Big Endian AIX, IBM i and Linux (RedHat 6.5 & 7.1 and SUSE 11) as well as Little Endian Linux (RedHat 7.1, SUSE 12 and Ubuntu 14); systems management using PowerVC, based on OpenStack, abandoning its own long-in-the-tooth Systems Director product; even the use of Nagios in IBM's newest secure, converged and integrated PurePower ready-to-deploy cloud solution; plus reliability, security, virtualization efficiency, serviceability, significant performance increases and cost competitiveness. Add in the technology sharing with the 150+ member strong OpenPOWER Foundation, delivering an open platform for the community to develop on and develop for, in contrast to the ecosystem Intel specifically has continued to close off.

Now, with POWER8 delivering roughly 2X the performance per core while running the same Linux byte ordering that is available on x86 systems, customers have a choice. Is "good enough" really good enough when you can get better for the same price, with all of the features mentioned above and greater performance, now managed by your preferred management platform from VMware? This isn't to imply that x86 is going away anytime soon or that Power will overtake the data center overnight. It does imply, though, that customers now have choices, and the contest no longer has to be just one x86 vendor against another; a superior architecture is in the mix as well.

Yes, this was very smart by VMware, as they win no matter what the customer decides to do. It was also very smart for IBM, as it should mean increased adoption of the Power platform in shops that would otherwise avoid it for lack of integration into the VMware stack. Both companies will then get the chance to introduce customers to other products in their portfolios, which means they are already in the door, and that is half the battle!