SAP HANA – could I have extra complexity please?

Just returned from IBM's Systems Technical University conference in Orlando, where I delivered presentations on four topics.

  1. Benefits of SAP HANA on POWER vs Intel
  2. Why IBM POWER systems are datacenter leaders
  3. Only platform that controls Software Licensing
  4. Why DB2 beats Oracle on POWER (implying it beats Oracle on Intel, too).

With the SAP Sapphire conference last week in Orlando, there was a slew of announcements. A quick reminder for the uninitiated: SAP HANA is ONLY supported on Intel and POWER based systems running one OS, Linux (SUSE or Red Hat). With that, IBM POWER continues to deliver the best value.

What is the value offered with the POWER stack? Flexibility! It really is that simple.  If I had a mic on the plane as I write this, I would drop it. Conversely, what is the value offered going with an Intel stack? Compromise!

Some of the flexibility offered through IBM POWER systems: scale-up, scale-out, complete virtualization; the ability to grow, shrink, move and perform concurrent maintenance; and mixed workloads, with existing ECC workloads on AIX or IBM i running alongside new HANA workloads on Linux, all on the same server. All of this runs on the most resilient HANA platform available.

Why do I label Intel systems as “Compromise” solutions? It isn't a competitive shot or FUD. Listen, as a Client Executive and Executive Architect for a Channel Reseller, I am able to offer my clients solutions from multiple vendors, including IBM POWER and Intel based systems manufacturers. I've made the conscious decision, though, to promote IBM POWER over Intel. Why? Because I not only believe in the capabilities of the platform, but also, having worked with some of the largest companies in the world, I regularly hear and see the impact that running Enterprise workloads on Intel based servers has on the business.

If you read my previous blog, I mention a client who recently moved their Oracle workloads from POWER to Intel. Within months, they had to buy over $5M in new licenses, going from a simple standalone setup and a few 2-node clusters (all on the same servers) to an 8-node VMware based Oracle RAC cluster. This environment is having daily stability issues that significantly impact their business. Yes, their decision to standardize on a single platform has introduced complexity that costs them money and resources (staff exhausted and lacking the skills to manage the complexity) and that ultimately impacts their end-users.

The “Compromise” I mention for hosting SAP HANA on Intel is that everything has an asterisk by it – in other words, a limitation or restriction. Everything requires follow-up questions and research to ensure that what the business wants to do can actually be done. Here are some examples.
1) VMware vSphere 5.5 initially supported 1 VM per system, which has now been increased to 4 VMs, but with many qualifications.
   a) Restricted to 2- & 4-socket Intel servers
      i) VMs are limited to a socket
      ii) A 2-socket server ONLY supports 2 VMs; a 4-socket server supports 4 VMs of 1 socket each
   b) Only E5 v2, E5 v3, E7 v2 and E7 v3 chips are supported – NO Broadwell
   c) Want to redeploy capacity for other uses? Appliances certified only for SoH or S4H cannot be used for other purposes such as BW
   d) Did I mention those VMs are also limited to 64 vCPUs and 1 TB of memory each?
   e) What if a VM needs more memory than what is attached to that socket? No problem, you just have to add an additional socket and all of its memory – no sharing!
2) VMware vSphere 6.0 just recently went from 1 to 16 VMs per system.
   a) VMs are still limited to a socket or 1/2 socket.
   b) 1/2 socket isn't as amazing as it sounds. Since vSphere supports 2-, 4- & 8-socket servers, there can be up to 16 x 1/2-socket VMs.
   c) What there cannot be is any combination of a VM larger than 1 socket with a 1/2 socket assigned. In other words, a VM cannot have 1.5 or 3.5 sockets. Any VM resource requirement above 1 socket requires the addition of an entire socket; 1.5 sockets becomes 2 sockets.
   d) Multi-node setups are NOT permitted …. at all!
   e) VMs larger than 2 sockets cannot use Ivy Bridge based systems, only Haswell or Broadwell chips – but ONLY on 4-socket servers. Oh my gosh, this is making my head hurt!
   f) An 8-socket system supports only a single production VM, using Haswell processors ONLY. NOT Ivy Bridge and NOT Broadwell!
   g) VMs are limited to 128 vCPUs and 4 TB of memory
3) VMware vSphere 6.5 with SAP HANA SPS 12 only supports Intel Broadwell based systems. What if your HANA appliance is based on Ivy Bridge or Haswell processor technology? “Where is that Intel rep’s business card? Guess I’ll have to buy another one since I can’t upgrade these.”
   a) VMs using more than 4 sockets are currently NOT supported with these Broadwell chips
   b) Now it gets better. I hope you are writing this down: for 2- OR 8-socket systems, the maximum VM size is 2 sockets. Only a 4-socket system supports 1 VM with 4 sockets.
   c) Same 1/2-socket restrictions as vSphere 6.0.
   d) Servers with more than 8 sockets do NOT permit the use of VMware
   e) If your VM requirements exceed 128 vCPUs and 4 TB of memory, you must move to a bare-metal system ….. Call me – I’ll put you on a POWER system where you can scale up and scale out without any of this mess
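The socket-rounding rules above are easier to see in code. Here is a minimal sketch (my own illustrative encoding of the vSphere 6.0 restrictions described above, not a VMware tool) of how a requested VM size maps to licensable sockets:

```python
import math

# Illustrative sketch of the vSphere 6.0 HANA VM sizing rules described above.
# The rule encoding is my own reading of the restrictions, not official logic.

def vsphere60_vm_sockets(requested_sockets: float, server_sockets: int) -> float:
    """Return the socket count actually allocated for a VM, or raise if unsupported."""
    if server_sockets not in (2, 4, 8):
        raise ValueError("vSphere 6.0 HANA supports 2-, 4- and 8-socket servers only")
    if requested_sockets <= 0 or requested_sockets > server_sockets:
        raise ValueError("invalid request")
    if requested_sockets <= 0.5:
        return 0.5                         # half-socket VMs are allowed
    # Above half a socket, allocation rounds up to whole sockets:
    # 1.5 sockets becomes 2, 3.5 becomes 4, and so on.
    return float(math.ceil(requested_sockets))

print(vsphere60_vm_sockets(0.5, 2))   # 0.5
print(vsphere60_vm_sockets(1.5, 4))   # 2.0 -- an entire extra socket
print(vsphere60_vm_sockets(3.5, 8))   # 4.0
```

The point: any requirement that spills past a socket boundary costs an entire additional socket.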

Contrast all of these VMware + Intel limitations, restrictions, liabilities and qualifications – simply said, “Compromise” – with the IBM Power System.

POWER8 servers run the POWER hypervisor, PowerVM. This hypervisor and its suite of features deliver flexibility, allowing all-physical, all-virtual or mixed physical & virtual resource usage on each system. Even where there are VM limits, such as 4 on the low-end system, that 4 could really be 423 VMs. I'm making a theoretical statement here to prove the point. Let's use a 2-socket, 24-core S824 server: 3 VMs, each with 1 core (yes, I said core) for production usage, and the 4th VM is really a Shared Processor Pool with 21 cores. Those 21 cores support up to 20 VMs per core, or 420 VMs. Any non-production use is permitted.
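The arithmetic behind that 423-VM thought experiment is simple. A sketch, using the numbers above (the 20-VMs-per-core figure reflects PowerVM's 0.05-core micro-partition minimum):

```python
# Back-of-the-envelope for the S824 example above (my own arithmetic sketch).
total_cores = 24                          # 2-socket, 24-core S824
production_vms = 3                        # 1 dedicated core each
pool_cores = total_cores - production_vms # 21 cores in a Shared Processor Pool
vms_per_core = 20                         # PowerVM micro-partitions: 0.05 core minimum

pool_vms = pool_cores * vms_per_core      # 420 shared-pool VMs
print(production_vms + pool_vms)          # 423 VMs on one 2-socket server
```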

Each PowerVM VM supports up to 16 TB of memory and 144 cores. VM sizes above 108 cores require SMT4, whereas 108 cores or fewer permit SMT8. Thus, 144 cores with SMT4 is 576 vCPUs, or 4.5X what Intel can do, with 4X the memory footprint. By the way, a 108-core VM would support 864 vCPUs – just saying! Note: I need to verify this, as the largest SMT8 VM may be 96 cores with only 768 vCPUs.
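For the skeptics, the vCPU math above works out as follows (a sketch using the published limits; the 128-vCPU figure is the vSphere 6.x per-VM maximum mentioned earlier):

```python
# vCPU arithmetic for the PowerVM limits above (illustrative).
vm_cores_smt4 = 144          # largest VM, which must run SMT4 (4 threads/core)
vm_cores_smt8 = 108          # largest VM still eligible for SMT8 (8 threads/core)
vsphere_max_vcpus = 128      # vSphere 6.x per-VM vCPU ceiling

print(vm_cores_smt4 * 4)                      # 576 vCPUs
print(vm_cores_smt8 * 8)                      # 864 vCPUs
print(vm_cores_smt4 * 4 / vsphere_max_vcpus)  # 4.5x the vSphere limit
print(16 / 4)                                 # 4.0 -- memory: 16 TB vs 4 TB
```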

Not only can we allocate physical cores to VMs without being limited to half- or full-socket increments like Intel, but POWER systems' granularity also allows for adjustments at the vCPU level.

PowerVM supports scale-out and scale-up.  Then again, if you have heard or read about the Pfizer story for scale-out BW, you might rethink a literal scale-out approach. Read IBM’s Alfred Freudenberger’s blog on this subject at https://saponpower.wordpress.com/2016/05/26/update-sap-hana-support-for-vmware-ibm-power-systems-and-new-customer-testimonials/

While on the subject of BWoH/B4H, PowerVM supports 6 TB per VM, whereas vSphere 6.0 supports 3 TB, and the limitations only grow from there.

Do you see why I choose to promote IBM Power over Intel? When I walk into a client, the most valuable item I bring with me is my credibility. HANA on Intel is a constant train wreck of changes & gotchas. Clients currently running HANA on Intel, or better yet still running ECC on Intel, have options. That option is to move to a HANA 2.0 environment using SUSE 12 or Red Hat 7 Linux on POWER servers. Each server can host multiple VMs with greater resiliency, providing the business the flexibility it wants from the critical business system that likely touches every part of the business.

Does your IT shop use a combination wrench?

More and more, IT shops seem inclined to consolidate and simplify their infrastructure onto one platform – a mindset that all workloads can or should run on a single platform, incorporated into “Software-defined this” and “Software-defined that”. It tantalizes decision makers' senses as vendors claim to reduce complexity and cost.

Technology has become Ford vs Chevy, or John Deere vs Case International. While these four vendors each have unique capabilities and offerings, they are also leaders in innovation and reliability. For IT shops, there is a perception that only Intel & VMware are viable infrastructure options for deploying every workload type: mission/life-critical workloads in healthcare, high-frequency financial transactions, HPC, Big Data, Analytics, emerging Cognitive & AI, but also the traditional ERP workloads that run entire businesses. SAP ECC, SAP HANA and Oracle EBS are probably the most common I see, though there are also industry-specific ones for industrial and automotive companies – I'm thinking of Infor.

When a new project comes up, there is little thought given to the platform. Either the business or maybe the ISV will state what and how many of server X should be ordered. The parts arrive and eventually get deployed. Little consideration is given to the total cost of ownership or the impact on the business caused by the system complexity.

I’ve watched a client move their Oracle workloads to IBM POWER several years ago. This allowed them to reduce their software licensing and annual maintenance cost as well as to redeploy licensing to other projects – cost avoidance by not having to add net new licensing.  As it happens in business, people moved on, out and up. New people came in whose answer to everything was Intel + VMware.  Yes, a combination wrench.

If any of you have used a combination wrench, you know there are times it is the proper tool. However, it can also strip or round over the head of a bolt or nut if too much pressure or torque is applied. Sometimes the proper tool is an SAE or metric box wrench, possibly a socket, even an impact wrench. In this client's case, they have started to move their Oracle workloads from POWER to Intel – workloads currently running on standalone servers or, at most, 2-node PowerHA clusters. They are moving these simple, low-complexity Oracle VMs to 6-node VMware-based Oracle RAC clusters that have since grown to 8 nodes. Because we all know that Oracle RAC scales really well (please tell me you picked up on the sarcasm).

I heard from the business earlier this year that they had to buy over $5M of net-new Oracle licensing for this new environment. Because of this unforeseen expense, they are moving other commercial products to open source – since we all know that open source is “free” – to offset the Oracle cost.

Oh, I forgot to mention: that 8-node VMware Oracle RAC cluster is crashing virtually every day. I guess they are putting too much pressure on the combination wrench!

Oracle is a mess & customers pay the price!

Chaos that is Oracle

Clients are rapidly adopting open source technologies in support of purpose-built applications while also shifting portions of on-premises workloads to major cloud providers like Amazon's AWS, Microsoft's Azure and IBM's SoftLayer. These changes are sending Oracle's licensing revenue into the tank, forcing them to re-tool … and I'm being kind saying it this way.

What do we see Oracle doing these days?

  • Aggressively going after VMware environments that use Oracle Enterprise products, for licensing infractions
  • Pushing each of their clients toward Oracle’s public cloud
  • Drastically changing how Oracle is licensed for Authorized Cloud Environments using Intel servers
  • Latest evidence indicates they are set to abandon Solaris and SPARC technology
  • On-going staff layoffs as they shift resources, priorities & funding from on-premises to cloud initiatives

VMware environments

I've previously discussed how, for running Oracle on Intel (vs IBM POWER), Intel & VMware have an Oracle problem. This was acknowledged by Chad Sakac, President of Dell EMC's Converged Division, in his August 17, 2016 blog – what really amounted to an open letter to King Larry Ellison himself. I doubt most businesses using Oracle with VMware & Intel servers fully understand the financial implications this has for their business. Allow me to paraphrase the essence of the note: “Larry, take your boot off the necks of our people”.

This is a very contentious topic, so I'll not take a position but will try to briefly explain both sides. Oracle's position is simple, even though it is very complex. Oracle does not recognize VMware as an approved partitioning method for limiting Oracle licensing (it views VMware as soft partitioning). As such, clients running Oracle in a VMware environment, regardless of how little or how much is used, must license it for every Intel server in that client's enterprise (assuming vSphere 6+). They really do go beyond a rational argument, IMHO. Since Oracle owns the software and authored the rules, they use these subtleties to lean on clients, extracting massive profits despite what the contract may say.

An example that comes to mind is how Oracle suddenly changed licensing configurations for Oracle Standard Edition and Standard Edition One. They sunset both of these products as of December 31, 2015, replacing both with Standard Edition 2. In what can only be described as screwing clients, they halved the number of sockets allowed on a server or in a RAC cluster and limited the number of CPU threads per DB instance, while doubling the minimum number of Named User Plus (NUP) licenses. On behalf of Larry, he apologizes to any 4-socket Oracle Standard Edition users, but if you don't convert to a 2-socket configuration (2 sockets for 1 server, or 1 socket each for 2 servers using RAC), then be prepared to license the server under the Oracle Enterprise Edition licensing model.

The Intel server vendors and VMware have a different interpretation of how Oracle should be licensed. I'll boil their position down to using host or CPU affinity rules. House of Bricks published a paper that does a good job of defending Intel+VMware's licensing position. In the effort, they also show how fragile the ground is beneath this approach, highlighting the risks businesses take if they hitch their wagons to HoB's, VMware's and at least Dell's recommendations.

This picture, which I believe House of Bricks gets the credit for creating, captures the Oracle licensing model for Intel+VMware environments quite well. When you pull your car into a parking garage, you expect to pay for 1 spot, yet Oracle says you must pay for every one, since you could technically park in any of them. VMware asserts you should at most pay for a single floor, because your vehicle may not be a compact car, may not have the clearance for all levels, and there are reserved & handicapped spots you can't use. You get the idea.

[Image: oracle_parking_garage – Oracle licensing as a parking garage]

It is simply a disaster for any business to run Oracle on Intel servers. Oracle wins if you do not virtualize, running each workload on standalone servers. Oracle wins if you use VMware, regardless of how little or how much you actually use. Be prepared to pay or to litigate!

Oracle and the “Cloud”

This topic is more difficult to source, so I'll just stick to anecdotal evidence. Take it or leave it. A contract renewal, adding products to contracts, or a new project like migrating JD Edwards “World” to “Enterprise One” or a new Oracle EBS deployment would subject a business to an offer like this: “Listen Bob, you can buy 1000 licenses of XYZ for $10M, or you can buy 750 licenses of XYZ for $6M, buy 400 Cloud units for $3M, and we will generously throw in 250 licenses …. you'll still have to pay support, of course. You won't get a better deal, Bob, act now!” Yes, Oracle is willing to take a hit on on-premises license revenue while bolstering their cloud sales – simply shuffling the Titanic's deck chairs. These clients, for the most part, are not interested in the Oracle cloud and will never use it other than to get a better deal during negotiations. Oracle then reports to Wall Street that they are having tremendous cloud growth. Just google “oracle cloud fake bookings” to read plenty of evidence supporting this.

Licensing in the Cloud

Leave it to Oracle marketing to find a way to get even deeper into clients' wallets – congratulations, they've found a new way in the “Cloud”. Oracle charges at least 2X more for Oracle licenses on Intel servers that run in Authorized Cloud Environments (ACEs). You do not license Oracle in the cloud using the on-premises licensing factor table. The more VMs running in an ACE, the more you will pay vs an on-premises deployment. Properly licensing an on-premises Intel server (remember, the underlying point remains that Oracle on POWER servers is the better solution), regardless of whether virtualization is used and assuming a 40-core server, would require 20 Oracle licenses (the licensing factor for Intel servers is 0.5 per core). Assume 1 VMware server, ignoring that it is probably part of a larger vSphere cluster. Once licensed, clients using VMware could theoretically run as many Oracle VMs as desired or supported by that server. Over-provision the hell out of it – it doesn't matter. For that same workload in an ACE, you pay for what amounts to every core. Remember: if the core resides on-premises, it is 1 Oracle license for every 2 Intel cores, but in an ACE it is 1 OL per core.

AWS
Putting your Oracle workload in the cloud? Oracle's license rules stipulate that in AWS, both the physical core and its hyperthread are counted as vCPUs. Thus, 2 vCPUs = 1 Oracle license (OL). Using the same 40-core Intel server mentioned above, with hyperthreading it would present 80 threads, or 80 vCPUs. Under Oracle's new cloud licensing guidelines, that is 40 OL. If this same server were on-premises, those 40 physical cores (regardless of threads) would be 20 OL ….. do you see it? The licensing is double!!! If your AWS vCPU consumption is less than your on-premises consumption, you may be OK. As soon as your consumption goes above that point – well, break out your checkbook. Let your imagination run wild thinking of the scenarios where you will pay for more licenses in the cloud than on-prem.

Azure
Since Azure does not use hyperthreading, 1 vCPU = 1 core. For Azure, or any other ACE where hyperthreading is not used, the rule is 1 vCPU = 1 OL. A workload that requires 4 vCPUs therefore requires 4 OL, versus the 2 OL it would need on-premises.
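Putting the three counting rules side by side (a sketch based on the factors discussed above: a 0.5 core factor on-premises, 2 AWS vCPUs per license with hyperthreading, 1 Azure vCPU per license without):

```python
# Oracle license counts under the three rules discussed above (illustrative).

def on_prem_licenses(cores: int) -> float:
    return cores * 0.5               # Intel core factor of 0.5 on-premises

def aws_licenses(vcpus: int) -> float:
    return vcpus / 2                 # with hyperthreading: 2 vCPUs = 1 license

def azure_licenses(vcpus: int) -> float:
    return float(vcpus)              # no hyperthreading: 1 vCPU = 1 license

cores = 40                           # the 40-core server from the example
print(on_prem_licenses(cores))       # 20.0 licenses on-premises
print(aws_licenses(cores * 2))       # 40.0 licenses for the same box in AWS
print(azure_licenses(cores))         # 40.0 licenses for 40 Azure vCPUs
```

Same hardware, double the licenses the moment it lands in an Authorized Cloud Environment.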

Three excellent references to review: the first is Oracle's cloud licensing document; the second is an article by Silicon Angle giving their take on this change; and the last is a blog by Tim Hall, a DBA and Oracle ACE Director, sharing his concerns. Just search this topic starting from January 2017 and read until you fall asleep.

Oracle
Oracle offers their own cloud and, as you might imagine, they do everything they can to favor it through licensing, contract negotiations and other means. From SaaS to IaaS to PaaS, their marketing machine says they are second to none, whether the competition is Salesforce, Workday, AWS, Azure or anyone else. Of course, neither analysts, media, the internet, nor Oracle's own earnings reports show they are having any meaningful success – at least not to the degree they claim.

Most recently, Oracle gained attention for updating how clients can license Oracle products in ACEs, as mentioned above. As you might imagine, Oracle licenses its products slightly differently in its own cloud than in competitors' clouds, but they still penalize Intel and even SPARC clients, whom they'll try to migrate into the cloud running Intel (since it appears Oracle is abandoning SPARC). The Oracle Cloud offers clients access to its products on an hourly or monthly basis, in metered and non-metered formats, at up to 4 different levels of software. Focusing on Oracle DB, the general tiers are the Standard, Enterprise, High-Performance and Extreme-Performance packages. Think of them as Oracle Standard Edition, Enterprise Edition, EE+tools and EE+RAC+tools. Oracle also defines the hardware tier as “Compute Shapes”. The three tiers are General Purpose, High-Memory and Dedicated compute.

Comparing the cost of an on-premises perpetual license for Oracle Enterprise Edition vs a non-metered monthly license for the Enterprise tier is fair because both use the Oracle Enterprise Edition database. Remember, a perpetual license is a one-time purchase: $47,500 list for EE DB, plus 22% per year annual maintenance. The Enterprise tier using a High-Memory compute shape in the Oracle cloud is $2,325 per month. This compute shape consists of 1 OCPU (Oracle CPU), or 2 vCPUs (2 threads / 1 core). Yes, just like AWS and Azure, Intel licensing in the cloud is at best 1.0 per core vs 0.5 on-premises. Depending on how a server might be over-provisioned, and given that an on-premises server is fully licensed with half of its installed cores, there are a couple of ways clients will vastly overpay for Oracle products in any cloud.

The break-even point for a perpetual license + support vs a non-metered Enterprise tier using a High-Memory compute shape is roughly 34 months.

  • Perpetual license
    • 1 x Oracle EE DB license = $47,500
    • 22% annual maintenance = $10,450
    • 3 year cost: $78,850
  • Oracle Cloud – non-metered Enterprise using High-Memory shape
    • 1 x OCPU for Enterprise Package for High-Compute = $2325/mo
    • 1 year cloud cost = $27,900
    • 36 month cost: $83,700
  • Cross-over point is at roughly 34 months
    • $79,050 is the 34-month cumulative Cloud cost, just above the $78,850 3-year perpetual cost
  • An Oracle Cloud license becomes significantly more expensive after this.
    • year 4 for a perpetual license would be $10,450
    • 12 months in year 4 for the Cloud license would be $27,900
    • Annual cost increase for a single cloud license over the perpetual license = $17,450
  • Please make your checks payable to “Larry Ellison”
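The cross-over arithmetic is easy to verify (a sketch using the list prices above; I assume maintenance is billed annually and use the rounded $10,450 maintenance figure):

```python
# Perpetual vs. Oracle Cloud cost comparison using the list prices above.
LICENSE = 47_500   # Oracle EE DB perpetual license, list price
MAINT   = 10_450   # 22% annual maintenance on the license
CLOUD   = 2_325    # Enterprise package, High-Memory shape, per OCPU per month

three_year_perpetual = LICENSE + 3 * MAINT
print(three_year_perpetual)              # 78850

# First month in which cumulative cloud spend passes the 3-year perpetual cost:
month = 1
while month * CLOUD < three_year_perpetual:
    month += 1
print(month, month * CLOUD)              # 34 79050

# Steady-state annual premium for staying in the cloud after the cross-over:
print(12 * CLOUD - MAINT)                # 17450
```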

Oracle's revenues continue to decline as clients move to purpose-built NoSQL solutions such as MongoDB, Redis Labs, Neo4j, OrientDB and Couchbase, as well as SQL-based solutions from MariaDB and PostgreSQL (I like EnterpriseDB); even DB2 is a far better value. Oracle's idea isn't to re-tool by innovating and listening to clients in order to move with the market. No, they get out their big stick – repeating the classic mistake so many great companies have made before them, which is to not evolve while pushing clients until something breaks. Yes, Boot Hill is full of dead technology companies that failed to innovate and adapt. This is why Oracle is in complete chaos. Clients beware – you are on their radar!

 

 

C is for Performance!

E850C is a compact power-packed “sweet spot” server!

“C” makes the E850 a BIG deal!

IBM delivered a modest upgrade to the entry-level POWER8 Enterprise server, going from the E850 to the E850C. The new features are in the processors, memory, Capacity on Demand and bundled software.

The most exciting features of the new E850C, which by the way comes with a new MTM of 8408-44E, are the processors. You might think I'd say that anyway, but here is why the E850C is the new “sweet spot” server for AIX & Linux workloads that require a mix of performance, scalability and reliability features.

A few things are the same on the E850C as they were on the E850:

  • Classified as a “small” tier server
  • Available with a 3 year 24 x 7 warranty
  • PVU for IBM software is 100 when using AIX
  • PVU for IBM software is 70 when using Linux
  • Supports IFLs (Integrated Facility for Linux)
  • Offers CuOD, Trial, Utility and Elastic CoD
  • Does NOT offer mobile cores or mobile memory (boo hiss)
  • Does NOT support Enterprise Pools (boo hiss)

The original 8408-E8E, aka E850, was available with 32 cores at 3.72 GHz, 40 cores at 3.35 GHz and 48 cores at 3.02 GHz, initially supporting 2 TB of DDR3 memory and eventually up to 4 TB. Using up to 4 x 1400W power supplies, and due to its dense packaging, what it did not offer was the option to exploit EnergyScale, which lets users decrease or increase processor clock speeds. Clock speeds were capped at their nominal 3.72, 3.35 and 3.02 GHz, with no option to choose among the usual settings: do nothing, lower or raise based on utilization, lower to a set point or, more importantly, raise to the higher rate. That is free performance – rPerf, in the case of AIX.

Focusing on the processor increase, because who the hell wants to run their computers slower, the E850C offers a modest EnergyScale boost over nominal, ranging from 2.5% to 4.6%. I say modest because the other POWER8 models range from 4% up to 11% <play Tim Allen “grunt” from Home Improvement>. This modest boost doesn't matter much, because the new C model delivers 32 cores at 4.22 GHz nominal increasing to 4.32 GHz, 40 cores at 3.95 GHz nominal increasing to 4.12 GHz and 48 cores at 3.65 GHz nominal increasing to 3.82 GHz. These speeds are at the high end for every scale-out server and on par with the E870C/E880C models.

Putting these performance increases into perspective, comparing nominal rPerf values for the E850 vs E850C shows this: the 32-core E850C gains 59 rPerf, the 40-core E850C gains 88 rPerf and the 48-core E850C gains 113 rPerf. By doing nothing but increasing the clock speed, the 48-core E850C delivers an rPerf increase equivalent to a 16-core POWER6 570.

It hasn't been mentioned yet, but the E850 & E850C use a 4U chassis. Looking at the 48-core E850C just mentioned, it delivers an rPerf of 859. Compare this to the 16U POWER7+ 770 (9117-MMD) with 64 cores delivering only 729 rPerf, or, going back to the initial 770 model, the 9117-MMB with 48 cores in a 16U footprint delivering 464 rPerf. Using the MMD values, this is a 4:1 footprint reduction and an 18% increase in rPerf with a 25% reduction in cores. Why does that matter? Greater core strength means fewer OS & virtualization licenses & SWMA, but more importantly, less enterprise software licensing such as Oracle Enterprise DB.
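The consolidation math from that comparison checks out (a quick sketch using the rPerf figures quoted above):

```python
# Footprint and rPerf comparison from the paragraph above (illustrative arithmetic).
e850c_rperf, e850c_u, e850c_cores = 859, 4, 48    # 48-core E850C, 4U
mmd_rperf,   mmd_u,   mmd_cores   = 729, 16, 64   # POWER7+ 770 (9117-MMD), 16U

print(mmd_u / e850c_u)                            # 4.0 -- 4:1 footprint reduction
print(round((e850c_rperf / mmd_rperf - 1) * 100)) # 18 -- % rPerf increase
print(round((1 - e850c_cores / mmd_cores) * 100)) # 25 -- % fewer cores to license
```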

IBM achieved this in a couple of ways. Not being an IBMer, I don't know all of the techniques, but they include increased chip efficiency, upgraded 2000W power supplies and a move to DDR4 memory, which uses less power.

What else?

Besides the improved clock speeds and the bump to DDR4 memory, the E850C reduces the minimum number of active cores. Every E850C must have a minimum of 2 processor books (2×8, 2×10 or 2×12 cores) while only requiring 8, 10 or 12 active cores, depending on the processor book used. The E850 required all cores in the first 2 processor books to be active. This change gives clients a lower entry price into the “sweet spot” server. Memory activations are unchanged: 50% of installed memory or 128 GB, whichever is more.

A couple of nice upgrades from the E850: Active Memory Mirroring and PowerVM Enterprise Edition are now standard, while the 3-year 24 x 7 warranty remains (except in Japan).

The E850C does not support IBM i, but it does support AIX 6.1, 7.1 and 7.2 (research specific versions in the System Software Maps) and the usual Linux distros.

Software bundle enhancements over the E850 are:

  • Starter pack for SoftLayer
  • IBM Cloud HMC apps
  • IBM Power to Cloud Rewards
  • PowerVM Enterprise Edition

Even though it isn’t bundled in, consider using IBM Cloud PowerVC Manager, which is included with the AIX Enterprise Edition bundle or à la carte with AIX Standard Edition or any Linux distro.

In summary

The E850C is a power-packed, compact package. With up to 48 cores and 4 TB of RAM in a 4U footprint, it is denser than 2 x 2U S822s with 20 cores / 1 TB RAM or 1 x 4U S824 with 24 cores / 2 TB RAM. Yes, the E870C with 40 cores and the E880C with 48 cores both offer 8 TB of RAM in a single node, but they still require 7U to start with. If clients require the greatest scalability, performance, flexibility and reliability, they should look at the E870C or E880C, but for a lower entry price that delivers high performance in a compact solution, the E850C delivers the complete package.

 

Not on the Dell/EMC Bandwagon. More of the same. OpenPOWER changes the game!

Reading articles around social media yesterday about the two companies' consummation on 9/7/16, one would think the marriage included a new product or solution revolutionizing the industry. I haven't heard of any, but I do know that both companies have continued to shed employees and sell off assets not core to the go-forward business, to capture critical capital to fund the massive $63B deal. They will also continue to evaluate products from both Dell's & EMC's traditional portfolios to phase out, merge, sell or kill due to redundancies and other reasons. It just happens. For them to say otherwise is misleading at best. Frankly, it hurts their credibility when they deny this, as there are already examples of it occurring.

Going forward, I do not see how the combined products of Dell, which at its core sells commodity Intel servers that are not even best of breed but rather the low-cost leader, paired with the high-end, high-development-cost products from EMC, will be any different on 9/8/16 than they were on 9/6/16. EMC's problem of customers moving away from high-margin, high-end storage systems to the highly competitive, lower-margin All-Flash Array products will not get any better for the newly combined company. The AFA space has many good competitors offering “Good Enough” features that give clients 1) lower cost, 2) comparable or better features, and 3) an alternative to a tier-1 player that some customers resist, feeling they overpay for the privilege of working with them.

About 2 years ago, EMC absorbed VCE with its converged infrastructure called vBlock – a term I argue is wrong; it is instead an Integrated Infrastructure built on VMware, Cisco UCS and EMC storage. VMware & EMC storage offer nothing unique. UCS is unique in the Intel space, but after the messy split of the VCE tri-union, VCE is now placing a lot of emphasis on its own hyper-converged offerings as well as products from Dell, thanks to this new-found marriage. It only makes sense to de-emphasize Cisco in a VCE solution and start promoting Dell products. That means going from the leader in Intel blade solutions to “me-too” Dell products that are average in a field of “Good Enough” technology whose most notable feature is low cost.

As I listen to today's IBM announcement of 3 new OpenPOWER servers, I can't help but wonder how much longer Dell's low-cost advantage will remain. I'm not sure what they will use for SAP HANA workloads requiring more than 4-socket Intel servers, since HPE just bought SGI, primarily for its 32-socket Intel server technology. I guess they could partner with Lenovo on the x3950, or with Cisco on the C880, which I believe Cisco actually OEMs from Hitachi. Dell servers are woefully inadequate with regard to RAS features – not just against POWER servers, but even against other Intel competitors like Lenovo (thanks to its purchase of IBM's xSeries), Hitachi and Fujitsu, who all have stronger offerings than Dell. RAS features simply cost more, which is why you didn't see IBM with its xSeries, Hitachi or Fujitsu become volume leaders. It is also why you are seeing more software-defined solutions built to mask hardware deficiencies, which brings its own problems.

Here is a quick review of today's announcements. The first server is a 2-socket, 2U server built for Big Data, hosting 12 internal front-facing drive slots. The next is a 2-socket, 1U server offering almost 7K threads in a 42U rack – tremendous performance for clients looking for data-rich, dense computing. The 3rd is a 2-socket, 2U server that is the first commercial system to offer NVIDIA's NVLink technology, connecting 2 or 4 GPUs directly to each other as well as to the CPUs. Every connection is 160 GB/s bi-directional, roughly 5X what is available on Intel servers with GPUs connected to PCIe Gen3 adapter slots.

[Image: OpenPOWER server family, September 2016]

These OpenPOWER systems allow clients to build their own solution, or deploy them as part of an integrated product with a storage and management stack built on OpenStack.  They are ideal for Big Data, Analytics, HPC, Cloud, DevOps and open source workloads like SugarCRM, NoSQL, MariaDB and PostgreSQL (I like EnterpriseDB for support), or even IBM's vast software portfolio such as DB2 v11.1.

Pricing for the 3 new OpenPOWER models, as well as the first 2 announced earlier in the year, is available on the Scale-out Linux on Power page. I recently did a pricing comparison for a customer with several 2-socket Dell servers vs a comparable 2-socket S822LC.  Both the list and web prices for the Dell solution were more expensive than OpenPOWER: the Dell list price was approximately 35% more and the Dell web price was 10% more, using the prices shown on the IBM OpenPOWER page linked in this same paragraph.  Clients looking to deploy large clusters or compute farms, or simply wanting to start lowering infrastructure cost, should take a hard look at OpenPOWER.  If you can install Linux on an Intel server, you have the skills to manage an OpenPOWER server.  Rocket scientists need not apply!

If you have questions, I encourage you to contact your local or favorite business partner.  If you do not have one, I would be happy to work with you.

HPE; there you go again! Part 2

Update on Sept 05, 2016: I split-up the original blog (Part 1) into two to allow for easier reading.

The topic that started the discussion is a blog by Jeff Kyle, HPE Director of Mission Critical Systems, promoting his Superdome X server at IBM's expense, using straw men and factually incorrect information as the basis of his arguments.

Now it's my turn to respond to all of Jeff's FUD.

  • Let's start with my favorite topic right now: we finally have an acknowledgement that Intel customers using VMware to run Oracle Enterprise Edition software products licensed by core have a problem.
    • VCE President Chad Sakac pens an open letter to Larry at Oracle, asking him to take his jackboot off the necks of his VMware people.
    • Read my blog response
  • VMware's Oracle problem is this. Oracle's position is essentially that if customers are running any Oracle Enterprise product licensed by core on a server running VMware managed by vCenter, then ALL (yes, ALL) servers in the cluster under that vCenter environment should and MUST be licensed for ALL of the Oracle products running on that one server. Preposterous or not, it is not my fight. Obviously, VMware and the Intel server vendors whose sales are impacted by this are not happy. Oracle, which offers x86 alternatives in the form of Exadata and the Oracle Database Appliance, offers its own virtualization capability that is NOT VMware based and for which clients do NOT have to license all of the cores, only those being used.
  • VCE and House of Bricks (HoB), via a VCE-sponsored whitepaper, are encouraging customers to stand up to Oracle during contract negotiations and audits, and in general to take the position that your business will only pay for the cores, and thus the licenses, on which Oracle Enterprise products are actually running. Of course, neither VCE, nor HoB, nor any other Intel vendor I have read about is providing any indemnification to customers who stand up to Oracle and are found out of compliance, facing fines, penalties and fees.  Those customers have the choice to pay up or fight in court.  Yes, it's the right thing to do, but keep in mind that Oracle is a company predisposed to litigate.
  • Yes, I agree that software licensing and maintenance costs are among the largest cost items for a business; far higher than infrastructure, though Intel vendors would rather you not believe that.
  • IBM Power servers have several “forward looking Cloud deployment” technologies
    • Products like PowerVC, built on OpenStack, manage an enterprise-wide Power environment and integrate into a customer's OpenStack environment.
    • IBM Cloud PowerVC Manager, also built on OpenStack provides clients with enterprise wide private cloud capabilities.
    • Both PowerVC and Cloud PowerVC Manager integrate with VMware’s vRealize allowing vCenter to manage a Power environment.
    • If that isn't enough, using IBM Cloud Orchestrator, clients can manage a heterogeneous compute and storage platform on-prem, hybrid or exclusively in the cloud.
    • IBM will announce additional capabilities on September 8, 2016 that deliver more features to Power environments.
  • “Proprietary chips” – so boring. What does that mean?
    • Let's look at Intel as they continue to close their own ecosystem. They bought an FPGA company with plans to integrate it into their silicon. Instead of working with GPU partners like NVIDIA and AMD, they developed their own many-core accelerator, Xeon Phi (Knights Landing).  Long ago they integrated Ethernet controllers into their chips and, depending on the chip model, graphics capability. They build SSDs, attempted to build mobile processors, and, as my last example of the closing ecosystem, built their own high-speed, low-latency interconnect called OmniPath instead of working with high-speed Ethernet and InfiniBand partners like Mellanox.  Of course, unlike Mellanox, whose adapters provide offload processing capabilities, Intel's OmniPath, true to form, does not, requiring the main processor cores to service the high-speed traffic.  Wow, that should make for some unpredictable utilization and increased core consumption... which simply drives more servers and more licenses.
    • Now let's look at what Power is doing. IBM has opened up the Power Architecture specification to the OpenPOWER Foundation. Power.org is still the governing body, but OpenPOWER is designed to encourage development and collaboration under a liberal license to build a broad ecosystem.  Unlike Intel, which is excluding more and more partners, OpenPOWER has partner companies building Power chips and systems, not to mention peripherals and software.
    • I'll spare you from looking up the definitions of Open and Proprietary, as I think it is clear who is really open and who is proprietary.
  • Here is how the typical Intel “used car” salesman sells Oracle on x86: “Hey Steve, did you know that Oracle has a licensing factor of .5 on Intel while Power is 1.0? Yep, Power is twice as much. You know that means you will pay twice as much for your Oracle licenses! Hey, let’s go get a beer!”
    • What Jeff forgets to tell you, or simply does not know, is that the Pella example is unique: Oracle licensing could only be reduced on the Superdome X server because of its nPAR capability. Most customers do not run Oracle on large Intel servers like that which may offer a hardware partitioning feature allowing for reduced licensing; they typically run it on 2 or 4 socket servers.
    • The Superdome X server supports two types of partitioning carried over from the original Superdome (Itanium) servers: vPARs and nPARs. Both are considered hard partitioning, and both allow the system to be configured into smaller groups of resources so that only those cores need to be licensed, adhering to Oracle's licensing rules.
    • HPE provides the Pella case study, which states they have a 40-core partition separated from the other cores on the server using nPAR technology, appearing as a single server although made up of 2 blades. nPARs separate resources along "cell board" boundaries, each the equivalent of an entire 2-socket blade.  Pella's primary Oracle environment runs with 2 blades, each with 2 x 10 cores, totaling 40 cores. These two production blades with 40 cores and 20 Oracle licenses sit alongside 2 other blades in one data center, while at the failover site sits another HP SD-X chassis. I wonder if Pella realizes the inefficiency of the Superdome X solution. Every Intel server has a compromise. Traditional scale-out 1 and 2 socket servers compromise on scalability, performance and RAS, and traditional scale-up Intel servers of 4 sockets and larger do as well.  Each Superdome X blade has an XBar controller plus the SX3000 fabric bus. For this 4-socket NUMA partition to act like one server, every off-blade remote memory access requires 8 hops.  Further, if the second blade isn't in the matching slot scheme, such as blade 1 in slot 1 and blade 2 in slot 3, performance degrades further. Do you see what I mean about Intel servers having land mines with every step?
    • The Pella case study says the failover database server uses a single blade consisting of 30 cores.  I am not sure how they are doing that with E7_v3 or E7_v4 processors, as there is no 15-core option.  There is an E7_v2 (Ivy Bridge) 15-core option, but I doubt they would use it.  This single Oracle DB failover blade sits with 2 additional blades.  The fewest Oracle licenses you could pay for on the combined 4-socket, 40-core blade pair, assuming 2 x 10-core chips per blade, is 20 licenses.  So even if the workload ONLY requires 8, 12, 16 or 18 cores, the customer would still pay for 20 licenses.
    • This so-called $200,000 in Oracle licensing savings really is nothing.  I just showed a customer running Oracle EBS with Linux on Dell how they would save $2.5M per year in Oracle maintenance cost by moving the workload to AIX on POWER8.  Had they deployed the solution on AIX to begin with, factoring in the 5-year TCO difference between what they are paying with Intel and with POWER, this customer would have avoided spending $21M. Let that sink in.
    • I do not intend to be disrespectful to Pella, but had they put the Oracle workloads running on the older HP Superdome onto POWER8 in 2015, they would not have bought a single Oracle license. I can all but guarantee they would have been able to give licenses back, if desired. Not only would they avoid buying licenses; after returning licenses, they would save the 22% annual maintenance on each one.
    • Look at one of my older blogs where I give some Oracle licensing examples comparing licensing costs for Intel vs Power. It is typical of what I regularly see with clients, if anything understating the consolidation ratios and subsequent license reductions.
    • The Pella case study does not mention whether the new Superdome X solution uses any virtualization technology.  I can only assume it does not, since it is not mentioned.  With IBM Power servers running AIX, virtualization comes with every POWER server (note I said "running AIX").  With Power, the customer could add or remove cores and memory, add and remove I/O to any LPAR (LPAR = VM) while doing concurrent maintenance on the I/O path out-of-band via dual VIOS, and move a VM from one server to another live... maybe that last capability is only used when upgrading to the next generation of server... you know, POWER9, the next generation that would deliver to Pella even more performance per core, allowing them to return more Oracle licenses and save even more money.
  • This comes back to the "granddaddy" statement Jeff made. Power servers have a license factor of 1.0, but with POWER server technology customers ONLY license the cores used by Oracle. You can create dedicated VMs where you license only those cores, regardless of how many are in the server. Another option is to create a shared processor pool (SPP); without going into all of the permutations, let's simply say you ONLY license the cores used in the pool, not to exceed the number of cores in the SPP.  What is different from the dedicated VM is that within the SPP there can be 1 to N VMs sharing those cores and thus sharing the Oracle licenses.
  • I did some analysis, which I also use in my SAP HANA on POWER discussions, showing that processor performance per core has increased with each generation from POWER5 all the way to POWER8. With POWER9 details just discussed at Hot Chips 2016 earlier this month (August), we can expect a healthy increase over POWER8 as well.  Thus, P5 to P5+ saw an increase in per-core performance, and P5+ to P6 to P6+ to P7 to P7+ to P8 all saw successive increases.  Contrast that with Intel, and reference a recent blog I wrote titled "Intel, the Great Charade".  The first-generation Nehalem, called Gainestown, delivered a per-core performance rating (as provided by Intel) of .29. The next release was Westmere with a rating of .33. After that came Sandy Bridge at .32, followed by Ivy Bridge at .31, then Haswell at .29, with the latest generation also at .29.  What does this mean? In 2016, per-core performance is the same as it was for a processor from roughly the 2007 timeframe. Yes, they have more cores per socket, but I'll ask you: how are you paying for your Oracle, by core or by socket?
  • Next, IBM Power servers running AIX, which is what primarily runs Oracle Enterprise Edition Database, run with PowerVM, the software suite that exploits the Power Hypervisor: highly efficient and reliable firmware.  Part of this efficiency is how it shares, and thus dispatches, unused processor cycles between VMs, not to mention the availability of 8 hardware threads per core, clock speeds up to 4.5 GHz, at least 2X greater L1 and L2 caches, 3.5X greater L3 cache and 100% more L4 cache than Intel.  What does this mean? It means Power does more than just beat Intel by 2X; that is what I call a "foot race".   When you factor in the virtualization efficiency, you start to see processing efficiencies approaching 5:1, 10:1 or even higher.
  • I like to tell the story of a customer I had running Oracle EBS across two sites. It had additional Oracle products, RAC and WebLogic, but this example will focus just on Location 1 and on Oracle Enterprise Edition Database.  The customer was evaluating a Cisco UCS that was part of a vBlock, an Oracle Exadata and an IBM POWER7+ server. I'll exclude Exadata, again because of complexities in its licensing that skew the results against other Intel servers; just know the POWER7+ solution beat it soundly.  For the vBlock, a VCE engineer sized the server and core requirements for the 2-site solution.  Looking just at Location 1 for Oracle EE DB, the VCE engineer determined 300 Intel cores were required.  These workloads required varying numbers of cores: 7 cores in one server rounded up to 10, another server requiring 4 cores rounded up to 6 or maybe 8, repeated across dozens of workloads.  To reiterate: VCE did the Intel sizing, while I did the POWER7+ sizing independently, completing mine first for that matter.  My sizing showed only 30 POWER7+ cores were needed.  That is 300 Intel cores, or 150 Oracle licenses, compared to 30 POWER cores, or 30 Oracle licenses.  If my memory serves, the hard core requirement for the Intel workload on the vBlock was around 168 cores, which would still have been 84 Oracle licenses.  This customer was receiving a 75% discount from Oracle, and even then the difference in licensing cost (Oracle EE DB, RAC and WebLogic for 2 sites) was somewhere around $10-12M.  Factor in the 22% annual maintenance, and the 5-year TCO for the Intel solution, ONLY for the Oracle software, was around $20M vs $5-6M on POWER.  By the way, the hardware solution costs were relatively close, within a few hundred thousand dollars.
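To make the licensing math in that story concrete, here is a minimal Python sketch. The $47,500 EE DB list price, the 22% annual maintenance rate and the 0.5/1.0 license factors are Oracle's published figures; the core counts and 75% discount come from the example above, and the function names are my own.

```python
# Sketch of the Oracle EE DB licensing math from the sizing story above.
# List price, maintenance rate and license factors are Oracle's published
# figures; core counts and discount come from the story in the text.

EE_DB_LIST = 47_500      # Oracle EE DB list price per license (USD)
MAINT_RATE = 0.22        # annual maintenance, as a fraction of net license cost

def licenses_needed(cores, license_factor):
    """All cores running Oracle, times the platform's license factor."""
    return int(cores * license_factor)

def five_year_sw_cost(licenses, discount=0.0):
    """License cost plus five years of maintenance (starting year 1)."""
    net_per_license = EE_DB_LIST * (1 - discount)
    license_cost = licenses * net_per_license
    return license_cost + license_cost * MAINT_RATE * 5

intel = licenses_needed(300, 0.5)   # vBlock sizing: 300 cores -> 150 licenses
power = licenses_needed(30, 1.0)    # POWER7+ sizing: 30 cores -> 30 licenses

print(intel, power)
print(five_year_sw_cost(intel, discount=0.75))   # 5-year cost, Intel
print(five_year_sw_cost(power, discount=0.75))   # 5-year cost, POWER
```

Even for EE DB alone, the 5:1 difference in required licenses dominates everything else; RAC and WebLogic, licensed the same way, only widen the gap.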

I know we are discussing Oracle on Intel, but I wanted to share this SAP 2-tier S&D performance comparison between 4, 8 and 16 socket Intel servers and 2 and 8 socket POWER8 servers.  I use this benchmark because I find it one of the more reliable system-wide tests; many benchmarks focus on specific areas such as floating point or integer rather than transactional work involving compute, memory movement and active I/O.

[Image: SAP 2-tier S&D benchmark comparison, Intel vs POWER8]

Note in the results that the 4-socket Haswell server outperforms the newer 4-socket Broadwell server. Next, notice the 8-socket Haswell server outperforms the newer 8-socket Broadwell server. Lastly, notice the two 16-socket results, both of which are on HP Superdome X servers: using the SAP benchmark measurement of SAPS, they show the lowest SAPS per core of any of the Intel servers listed. Actually, do you notice another pattern? The 4-socket servers show greater efficiency than the 8-socket servers, which in turn show greater efficiency than the 16-socket servers.

Contrast that with the 2-socket POWER8 server, which by the way delivers 2X the best Intel result.  If the trend we just reviewed with the Intel servers held true, we would expect the 8-socket POWER8 result to show fewer SAPS per core than the 2-socket POWER8 server. We already know the answer because it was highlighted in green as the highest result: roughly 13% greater than the 2-socket POWER8.   The 8-socket POWER8 was also more than 2X greater than any of the Intel servers, and 2.8X greater than the 16-socket HP Superdome X specifically.

Here comes my close – let’s see if I do a better job than Jeff!

  • My last point is this, in response to Jeff's statement that "There's a compelling alternative. A "scale-up" (high capacity) x86 architecture with a large memory capacity for in-memory compute models can dramatically improve database performance and lower TCO."
    • I've already debunked the myth, and the simply false statement, that running Oracle on POWER costs more than on Intel. In fact, it is just the opposite, and by a significant amount.
    • Also, the HPE whitepaper "How memory RAS technologies can enhance the uptime of HPE ProLiant servers" states: "It might surprise you to know that memory device failures are far and away the most frequent type of failure for scale-up servers." It is amazing how HPE talks out of both sides of its mouth.  Memory fails more than any other component in HPE servers, yet they suggest you buy these large scale-up servers that hold more memory to host and run more workloads, including "in-memory" ones from SAP HANA, Oracle 12c In-Memory or DB2 with BLU Acceleration, while in their own publications acknowledging it is the part most likely to fail in their solution.
    • UPDATE: There is a better alternative to HPE Superdome X, scale-up, scale-out or any other Intel based server.  That alternative has higher processor performance, larger memory bandwidth, a (much) more reliable memory subsystem and stronger overall system RAS capabilities, along with a full suite of virtualization abilities. That alternative is an IBM POWER8 server, available in open source 1 and 2 socket configurations (look at the LC models), scale-out 1 and 2 socket models (look at the L models) and scale-up 4 to 16 socket Enterprise models.  I'll discuss more about HPE's and IBM's memory features in my next blog.

Your Honor, members of the jury, these are the facts as presented to you.  I leave it to you to come back with the correct decision: Jeff Kyle and HPE are guilty of misleading customers and propagating untruths about IBM POWER.

Case closed!


HPE; there you go again! Part 1

Updated Sept 05, 2016: Split the blog into 2 parts (Part 2). Fixed several typos and sentence-structure problems. Updated the description of the Superdome X blades to indicate they are 2-socket blades using Intel E7 chips.

It must be the season, as I find myself focused a bit on HPE.  Maybe it's because they seem to be looking for their identity as they now consider selling their software business.  This time, though, it is self-inflicted, as there has been a series of conflicting marketing actions. Their recent HPE RAS whitepaper states in its introductory section that memory is far and away the highest source of component failures in a system.  Shortly after that RAS paper was released, they posted a blog written by the HPE Server Memory Product Manager stating "Memory Errors aren't the end of the World".  Tell that to SAP HANA and Oracle Database customers, the latter of which I will be discussing in this blog.

HPE dares to step into the lion's den on a topic on which it has little standing to claim authority: how Oracle Enterprise software products are licensed on IBM Power servers.  As a matter of fact, thanks go to the President of VCE, Chad Sakac, for acknowledging that VMware has an Oracle problem.  On August 17th, Chad penned what amounts to an open letter to Larry and Oracle, begging them... no, demanding that Larry leave his people alone.  And by "his people", I mean customers who run Oracle Enterprise software products licensed by core on Intel servers using VMware.

Enter HPE with a recent blog by Jeff Kyle, Director of Mission Critical Solutions.  He doesn't say whether he is in a product development, marketing or sales role; I would bet it is one of the latter two, as I do not think a product developer would put himself out there like Jeff just did.  What he did is what all Intel marketing teams and sellers have done since the beginning of compute time, when the first customer thought of running Oracle on a server that wasn't "Big Iron".

Jeff sets up a straw man, stating that "software licensing and support" are "one of the top cost items in any data center", followed by the obligatory claim that moving it to "advanced" yet "industry-standard x86 servers" will deliver the ROI to achieve the goals of every customer, while coming damn close to solving world hunger.

Next is where he enters the world of FUD while also stepping into the land of make-believe.  Yes, Jeff talks about IBM Power technology as if it is treated by Oracle for licensing purposes the same as an Intel server, which it is not.  You will have to judge whether he did this on purpose or simply out of ignorance.  He does throw the UNIX platforms a bone by saying they have "excellent stability and performance", but stops there, only to claim they cost more than their industry-standard x86 counterparts.

He goes on to state UNIX servers <Hold Please> Attention: for purposes of this discussion, let's go with the definition that future UNIX references = AIX and RISC references = IBM POWER unless otherwise stated.  As I was saying, Jeff next claims AIX and POWER are not well positioned for forward-looking cloud deployments, continuing his diminutive descriptors by suggesting proper clients wouldn't want to work with "proprietary RISC chips like IBM Power". But the granddaddy of all his statements, and the one that is completely disingenuous, is:  <low monotone voice> "The Oracle license charge per CPU core for IBM Power is twice (2X) the amount charged for Intel x86 servers" </low monotone voice>.

In his next paragraph, he uses some sleight of hand by altering the presentation of the traditional full list-price cost for Oracle RAC associated with Oracle Enterprise Edition Database.  Oracle EE DB is $47,500 per license plus 22% maintenance per year, starting with year 1.  Oracle RAC for Oracle EE DB is $23,000 per license plus 22% maintenance per year, starting with year 1.  If you have Oracle RAC then, by definition, you also have corresponding Oracle EE DB licenses.  The author uses a price of $11,500 per x86 CPU core, and although in doing so he isn't wrong per se, I just do not like that he does not disclose the full license cost of $23,000 up front, as it looks like he is trying to minimize the cost of Oracle on x86.

A quick licensing review. Oracle maintains a license factor table for different platforms to determine how to license its products that are licensed by core. Most modern Intel servers carry a factor of 0.5; IBM Power is 1.0; HP Itanium 95XX-based servers, so you know, also carry 1.0.  Oracle, since it owns both the table and the software in question, can manipulate it to favor its own platforms, as it does, especially with the SPARC servers, which range from 0.25 to 0.75 while Oracle's own Intel servers are consistent with other Intel servers at 0.5.  Let's exclude the Oracle Intel servers for the purposes of this discussion, for the reason I just gave: they manipulate the situation to favor themselves. All other Intel servers "MUST" license ALL cores in the server, with very, very limited exceptions, times the licensing factor of 0.5.  Thus, a server with 2 x 18-core sockets has 36 cores: 2s x 18c = 36c x 0.5 license factor = 18 licenses for whatever product is being used.
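As a quick illustrative sketch (Python; the table layout and function name here are mine), the rule just described looks like this:

```python
# Oracle core-factor licensing: license ALL cores in the server,
# multiplied by the platform's factor from Oracle's license factor table.
LICENSE_FACTOR = {
    "intel_x86": 0.5,        # most modern Intel servers
    "ibm_power": 1.0,
    "hp_itanium_95xx": 1.0,
}

def oracle_licenses(sockets, cores_per_socket, platform):
    total_cores = sockets * cores_per_socket
    return total_cores * LICENSE_FACTOR[platform]

# The example from the text: 2 sockets x 18 cores on x86 = 18 licenses
print(oracle_licenses(2, 18, "intel_x86"))   # 18.0
```

The catch, as covered above, is the "ALL cores" part: on most x86 servers there is no Oracle-recognized way to license a subset, whereas on POWER you license only the cores in the dedicated VM or shared processor pool.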

What Jeff does next surprised me a bit.  He suggests customers not bother with 1 and 2 socket Intel "scale-out" servers, which generally rely on the Intel E5 (aka EP) chips.  By the way, Oracle's Exadata and Oracle Database Appliance now ONLY use 2-socket servers with E5 processors; let that sink in as to why.  The EP chips tend to have features that on paper mean less performance, such as lower memory bandwidth and fewer cores, while other features such as clock frequency are higher, which is good for Oracle DB.   These chips also have weaker RAS capabilities, for example missing the MCA (Machine Check Architecture) feature found only in the E7 chips.  He instead suggests clients look at "scale-up" servers, commonly classified as 4-socket and larger systems.  This is where I need to clarify a few things.  The HP Superdome X system, although it scales to 16 sockets, does so using 2-socket blades.  Each socket uses the Intel E7 processor, which, given this is a 2-socket blade, runs counter to what I described at the beginning of this paragraph, where 1 and 2 socket servers use E5 processors.  The HP SD-X design is meant to scale from 1 blade to 8 blades, or 2 to 16 sockets, which requires the E7 processor.

With the latest Intel Broadwell EX (E7) chips, the core counts available for the HP SD-X range from 4 to 24 cores per socket.  Configuring a blade with two 24-core E7_v4 chips (v4 indicates Broadwell) equals 48 cores, or 24 Oracle licenses; reference the discussion two paragraphs above.  His assertion is that by moving to a larger server you get a larger memory capacity for those "in-memory compute models", and it is this combination that will dramatically improve your database performance while lowering your overall Total Cost of Ownership (TCO).

He uses a customer success story for Pella (the window manufacturer), who avoided $200,000 in Oracle licensing fees after moving off a UNIX platform (not AIX in this case) to 2 x HPE Superdome X servers running Linux.  The HPE case study says the UNIX platform Pella moved off 9 years ago was actually an HP Superdome server with Intel Itanium processors running HP-UX.  Did you get that? HP migrated off its own 9-year-old server while implying it might be from a competitor, maybe even AIX on Power, since Power was referenced earlier in the story.  That circa-2006 Itanium may have used a Montecito-class processor. All of the early models before Tukwila were pigs, in my estimation: a lot of bluff and hyperbole but rarely delivering on the claims.  That era of Superdome would also have carried an Oracle license factor of 0.5, as Oracle didn't change it until 2010, and only on the newer 95xx-series chips.  Older systems were grandfathered and, as I recall, as long as they didn't add new licenses they remained under the 0.5 license model.  I would expect a 2014/2015-era Intel processor to outperform a 2006-era chip, although against a POWER5 1.9 or 2.2 GHz chip I might call it 50-50. :)

We have to spend some time discussing HP server technology, because Jeff is doing some major-league sleight of hand: the Superdome X supports a special hardware partitioning capability (more details below) that DOES allow for reduced licensing and that IS NOT available on non-Superdome x86 servers, or from most other Intel vendors unless they also have an 8-socket or larger system like SGI. Oh wait, HP just bought them.  Huh, wonder why they did that if the HPE Superdome X is so good.

Jeff then mentions an IDC research study; big deal. Here is a note from my pastor that says the HPE Superdome is not very good; who are you going to believe?

Moving the rest of the blog to Part 2.