Get more for less with POWER9

Who doesn’t expect more from a new product, especially when it is the next generation of that product? Whether it is the “All New 2019 Brand Model” car/truck/SUV or, for the MacBook fan, the latest MacBook Pro and macOS (just keep the magnetic power cord), the expectation is the same.

We want and expect more. IBM POWER8 delivered more: more performance, built-in virtualization on the Enterprise systems, mobile capacity to share capacity between like Enterprise servers, and a more robust reliability and availability subsystem, as well as improved serviceability features from the low end to the high end. Yes, all while dramatically improving performance over previous generations.

How do you improve upon something that is already really good? I’m purposefully avoiding the word “great” as it would make me sound like a sycophant who would accept a rock with a Power badge and call it “great”. No, I am talking about actual, verifiable features and capabilities delivering real value to businesses.

Since the POWER9 Enterprise systems have yet to be announced and I only know what I know through my secret sources, I’ll limit my statements to just the currently available POWER9 Scale-out systems.

  • POWER9 Scale-out systems now include PowerVM Enterprise Edition licenses
  • Workload Optimized Frequency now delivers frequencies up to 20% higher than the nominal or marketed clock frequency
  • PCIe4 slots to support higher speed and bandwidth adapters
  • From 2 to 4X greater memory capacity on most systems
  • New “bootable” internal NVMe support
  • Enhanced vTPM for improved Secure Boot & Trusted Remote Attestation
  • SR-IOV improvements
  • CAPI 2.0 and OpenCAPI capability – the latter, though I’m unaware of any supported features yet, is exciting for what it is designed and capable of doing
  • Improved price points using industry-standard (IS) memory

The servers also shed some legacy features that were getting long in the tooth.

  • Internal DVD drives – replaced by USB drive support
  • S924 with 18 drive backplane no longer includes add-on 8 x 1.8″ SSD slots

As consumers, we expect more from our next-generation purchases, and the same holds true with POWER9: get more capability, features and performance for less money.

Contact me if you would like a quote to upgrade to POWER9, or if you are running x86 workloads and would like to hear how you may be able to do far more with less. My services team can also ease any concerns or burdens that might keep you on your aging, and likely higher-cost, servers instead of upgrading to POWER9.



Upgrade to POWER9 – Never been easier!

Delivering more features and performance at a lower cost, POWER9 makes the ease and options available to upgrade more compelling than ever.

With an outstanding family of products in IBM’s POWER8 portfolio, it seemed impossible for IBM to deliver a successor with more features, increased performance and greater value, all at a lower price point. On February 13th, IBM announced the POWER9 Scale-out products supporting AIX, IBM i and Linux; the first POWER9 announcement occurred December 5, 2017 with the AC922, an HPC & AI beast.

These newly announced PowerVM-based systems consist of 1- and 2-socket systems supporting up to 4 TB of DDR4 memory, starting with the robust 1-socket S914, then accelerating to the 2RU 2-socket S922 and the 4RU 2-socket S924. IBM also announced sister systems to the S-models purpose-built for SAP HANA: the H922 & H924, otherwise identical to the S922 & S924. The H-models might also be considered hybrid systems as they come bundled with key software used with HANA while allowing a smaller AIX and IBM i footprint – sort of a hybrid between an S- and L-model system. There is also a Linux-only model, just as there was with POWER8. Called the L922, it is a 2-socket system, though available in a 1-socket configuration. Each of these systems supports up to 4 TB of memory except the S914, which supports up to 1 TB.

Why should businesses consider upgrading to POWER9? If they are running on POWER7 and older systems, clients will save significant cost by lowering hardware and software maintenance. Moreover, with the increased performance, clients will be able to consolidate more VMs than ever and reduce enterprise software product licensing as well as its exorbitant maintenance cost.

While Intel cancels Knights Landing and struggles to deliver innovation and performance on their 10nm and 7nm platforms, remaining in a perpetual state of treading water at 14nm, what they are delivering seems to benefit ISVs more than businesses.

Traditional workloads such as Oracle, DB2, WebSphere, SAP (ECC & HANA), Oracle EBS, PeopleSoft, JD Edwards, Infor, Epic and more all benefit. For businesses looking to develop and deploy technologies built in the 21st century, these purpose-built systems deliver new innovations ideally suited for workloads geared toward cognitive (analytics) and the web, from NoSQL products such as Redis Labs, Cassandra, Neo4j or Scylla to open-source relational databases like PostgreSQL or MariaDB.

With the increased performance and higher efficiencies, all software boats will rise running on POWER9.

My team of Architects and Engineers at Ciber Global are prepared to help migrate workloads from your POWER5, POWER6, POWER7 and even POWER8 systems running AIX 5.3, 6.1, 7.1 and 7.2 as well as IBM i 6.1, 7.1, 7.2 and 7.3 to POWER9.

POWER9 supports AIX 6.1, 7.1 and 7.2. For IBM i, it supports 7.2 & 7.3. For client systems not at these levels, our consultants are available to guide them through the requirements and their upgrade options. Whether using Live Partition Mobility, aka the Easy Button, to move workloads from POWER6, POWER7 or POWER8 systems to POWER9, or using more traditional methods such as AIX NIM or IBM i Full System Save/Restore, there is likely an approach that meets the business’s needs.

Rest assured, if you have doubts or concerns, reach out to my team at Ciber to discuss. And if you don’t already have the Easy Button, IBM is offering a 60-day trial key for clients to upgrade their PowerVM Standard Edition licenses to Enterprise Edition on their P6, P7 or P8 systems, making the upgrade to POWER9 not only financially easy but also technically easy.


HPE Memory RAS: Excels at Being Average

A recent HPE blog stating memory errors are not the end of the world was meant to reassure clients into accepting regular, unplanned platform disruptions. In reality, what HPE ends up saying is that there is little difference between the other commercial Intel server vendors and their own, as they all range from below average to average at best. It just so happens this specific blog was written by the HPE Server Memory Product Manager, who might be forgiven for painting this dire picture only to then present the best alternative; yes, HPE SmartMemory. *shock*

To HPE’s credit, they have quite a bit of documentation discussing server Reliability, Availability & Serviceability (RAS) features, specifically for their memory subsystem. They are fairly forthright about the strengths and weaknesses of their entry, mid-range and high-end servers. Sadly though, at every level their message is full of qualifiers, limitations and restrictions which the consumer has to wade through and understand.

An HPE whitepaper from February 2016 titled “How memory RAS technologies can enhance the uptime of HPE ProLiant servers” paints a starkly different picture than the blog. The whitepaper states on page 2, in the 2nd paragraph of the introductory summary section, “It might surprise you to know that memory device failures are far and away the most frequent type of failure for scale-up servers,“ up to 2X the rate of the next closest part when the memory is configured with a protection scheme no better than SDDC+1. A graph that immediately follows shows that configuring memory with a DDDC+1 protection scheme decreases memory failures by 85%. That is pretty good, yet the 85% figure in the whitepaper does not jibe with the blog, which states that when using HPE SmartMemory, memory errors are reduced 99.9998% (yes, that is 5 x 9’s). I call out this discrepancy because right after claiming 5×9’s, they point the reader to the very whitepaper I am citing here.

This blog is not meant to define all of the different terms used; you will have to do some of that work. However, it is worth noting that behind all of the wonderful features touted in the HPE blog, the HPE whitepaper and many other sources, the consumer will find many qualifiers, limitations and restrictions, such as:

  1. E5 chips do not support DDDC or DDDC+1
  2. E5 chips only support SDDC or SDDC + rank sparing
  3. Memory sparing consumes (wastes) either 25% or 12.5% of installed capacity
  4. EX chips support SDDC, SDDC + rank sparing, SDDC+1 and DDDC+1
  5. But DDDC+1 is ONLY available using x4 DIMMs and not x8 DIMMs
  6. DDDC+1 requires x4 DIMMs
  7. Advanced ECC is an option used across 2 DIMMs but can only fill 2 of 3 DIMM slots per channel
  8. Memory Mirroring is the most expensive in terms of cost & performance
  9. Memory Mirroring wastes 1/2 of the DIMM slots for the mirror – not usable
  10. Memory Mirroring only allows you to fill 2 of 3 DIMM slots per channel
  11. Memory Mirroring has a potential performance impact for WRITES

Let’s be clear: consumers have 3 primary options to configure memory on any of the Intel servers (a rough capacity sketch follows this list).

  1. Performance mode, which delivers the highest bandwidth with the fewest reliability features. Not an ideal option for in-memory workloads despite the appeal of maximizing bandwidth.
  2. Lockstep mode, meant to strike a balance of slightly decreased bandwidth (can be up to 50%¹) while increasing reliability over performance mode.  Probably the most common option selected.
  3. Memory Mirroring mode, which delivers the highest reliability at the expense of wasting 1/2 the memory capacity as well as a slight performance decrease (remember, this mode can only use 2 of the 3 DIMM slots per channel, so you already lose 1/3 of the memory capacity).
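To make those capacity penalties concrete, here is a rough sketch using the percentages quoted in this post (actual numbers vary by platform, DIMM population and channel rules, so treat it as illustrative only):

```python
# A rough sketch of usable capacity under the memory schemes discussed above.
# Percentages are the ones quoted in this post; real numbers depend on the
# platform, DIMM population and channel rules (e.g. 2-of-3 slots per channel).
CAPACITY_FACTOR = {
    "performance": 1.0,        # all installed capacity, lowest protection
    "rank_sparing_25": 0.75,   # 25% of capacity reserved as spare ranks
    "rank_sparing_12_5": 0.875,
    "mirroring": 0.5,          # half of the populated DIMMs hold the mirror copy
}

def usable_gb(installed_gb: float, scheme: str) -> float:
    """Usable memory after the capacity penalty of the chosen scheme."""
    return installed_gb * CAPACITY_FACTOR[scheme]

for scheme in CAPACITY_FACTOR:
    print(f"{scheme:>18}: {usable_gb(1024, scheme):7.1f} GB usable of 1024 GB installed")
```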

What is HPE’s response to clients who want increased memory RAS, especially for in-memory workloads such as SAP HANA? Buy more expensive E7-based servers to receive slightly higher memory RAS capability, OR install more memory on the already RAS-deficient E5-based servers to increase their capacity for spare memory ranks.

Net-net, HPE is pushing proprietary memory that is far more expensive than the industry-standard memory traditionally used with Intel servers, the very memory that earned Intel its reputation as a low-cost leader relative to traditional Enterprise-class systems like IBM POWER or SPARC. That is evident in the SAP HANA space, as the systems required to support these in-memory workloads tend to require more capacity: more cores to achieve the core-to-memory ratios and more sockets to achieve more memory capacity with its associated bandwidth. Yet HPE remains true to form: regardless of the path taken, it comes with increased cost, limitations, restrictions and qualifications.

Contrast the never-ending “Compromise” Intel options with IBM’s POWER8 servers, which use “No Compromise” Enterprise memory. This buffered memory offers spare capacity, spare lanes, memory instruction replay, Chipkill and an incredible DDDC+1+1, allowing for multiple DRAM failures before experiencing a system event. The design point for POWER8 memory is simple: not to fail!

As you consider platforms to host in-memory workloads such as SAP HANA or DB2 BLU, consider which basket you want to place all of your eggs into: a platform with a memory subsystem designed not to fail, or a platform with the unending limitations listed above. The choice should be easy – choose POWER!


SAP HANA – could I have extra complexity please?

I just returned from IBM’s Systems Technical University conference held in Orlando, having delivered presentations on 4 different topics:

  1. Benefits of SAP HANA on POWER vs Intel
  2. Why IBM POWER systems are datacenter leaders
  3. Only platform that controls Software Licensing
  4. Why DB2 beats Oracle on POWER (implying that it also beats Intel).

With the SAP Sapphire conference last week in Orlando, there was a slew of announcements. A quick reminder for the uninitiated: SAP HANA is ONLY supported on Intel and POWER based systems running Linux, either SUSE or Red Hat. With that, IBM POWER continues to deliver the best value.

What is the value offered with the POWER stack? Flexibility! It really is that simple.  If I had a mic on the plane as I write this, I would drop it. Conversely, what is the value offered going with an Intel stack? Compromise!

Some of the flexibility offered through IBM POWER systems: scale-up, scale-out, complete virtualization, the ability to grow, shrink, move and perform concurrent maintenance, and mixed workloads – existing ECC workloads on AIX or IBM i alongside new HANA running Linux, all on the same server. All of this runs on the most resilient HANA platform available.

Why do I label Intel systems as “Compromise” solutions? It isn’t a competitive shot nor FUD. Listen, as a Client Executive and Executive Architect for a Channel Reseller, I am able to offer my clients solutions from multiple vendors, including IBM POWER and Intel based systems manufacturers. I’ve made the conscious decision, though, to promote IBM POWER over Intel. Why? Because I not only believe in the capabilities of the platform, but, having worked with some of the largest companies in the world, I also regularly hear and see the impact that running Enterprise workloads on Intel based servers has on the business.

If you read my previous blog, I mention a client who recently moved their Oracle workloads from POWER to Intel. Within months, they had to buy over $5M in new licenses, going from a simple standalone server and a few 2-node clusters (all on the same servers) to an 8-node VMware based Oracle RAC cluster. This environment is having daily stability issues significantly impacting their business. Yes, their decision to standardize on a single platform has introduced complexity that costs the business money, exhausts resources that lack the proper skills to manage the complexity, and impacts their end users.

The “Compromise” I mention in hosting SAP HANA on Intel is that everything has an asterisk by it – in other words a limitation or restriction – and everything requires follow-up questions and research to ensure that what the business wants to do can actually be done. Here are some examples.
  1. VMware vSphere 5.5 initially supported 1 VM per system, which has now been increased to 4 VMs, but with many qualifications.
     a) Restricted to 2 & 4 socket Intel servers
        1) VMs are limited to a socket
        2) A 2-socket server ONLY supports 2 VMs; a 4-socket server supports 4 x 1-socket VMs
     b) Only E5_v2, E5_v3, E7_v2 and E7_v3 chips are supported – NO Broadwell
     c) Want to redeploy capacity for other uses? Appliances certified only for SoH or S4H cannot be used for other purposes such as BW
     d) Did I mention, those VMs are also limited to 64 vCPU and 1 TB of memory each
     e) What if a VM needs more memory than what is attached to that socket? No problem, you just have to add an additional socket and all of its memory – no sharing!
  2. VMware vSphere 6.0 just recently went from 1 to 16 VMs per system.
     a) VMs are still limited to a socket or 1/2 socket.
     b) 1/2 socket isn’t as amazing as it sounds. Since vSphere supports 2, 4 & 8 socket servers, there can be 16 x 1/2 socket VMs.
     c) What there cannot be is any combination of VMs larger than 1 socket with a 1/2 socket assigned. In other words, a VM cannot have 1.5 or 3.5 sockets. Any VM resource requirement above 1 socket requires the addition of an entire socket: 1.5 sockets becomes 2 sockets.
     d) Multi-node setups are NOT permitted …. at all!
     e) VMs larger than 2 sockets cannot use Ivy Bridge based systems, only Haswell or Broadwell chips – but ONLY on 4-socket servers. Oh my gosh, this is making my head hurt!
     f) An 8-socket system only supports a single production VM, using Haswell ONLY processors. NOT Ivy Bridge and NOT Broadwell!
     g) VMs are limited to 128 vCPU and 4 TB of memory
  3. VMware vSphere 6.5 with SAP HANA SPS 12 only supports Intel Broadwell based systems. What if your HANA appliance is based on Ivy Bridge or Haswell processor technology? “Where is that Intel rep’s business card? Guess I’ll have to buy another one since I can’t upgrade these.”
     a) VMs using more than 4 sockets are currently NOT supported with these Broadwell chips
     b) Now it gets better; I hope you are writing this down. For 2 OR 8 socket systems, the maximum VM size is 2 sockets. Only a 4-socket system supports 1 VM with 4 sockets.
     c) Same 1/2 socket restrictions as vSphere 6.0.
     d) Servers with more than 8 sockets do NOT permit the use of VMware
     e) If your VM requirements exceed 128 vCPU and 4 TB of memory, you must move it to a bare-metal system ….. Call me – I’ll put you on a POWER system where you can scale up and scale out without any of this mess

Contrast all of these VMware + Intel limitations, restrictions, liabilities and qualifications – simply said, “Compromise” systems – with IBM Power Systems.

POWER8 servers run the POWER Hypervisor, called PowerVM. This hypervisor and its suite of features deliver flexibility, allowing all-physical, all-virtual or a combination of physical & virtual resource usage on each system. Even where there are VM limits, such as 4 on the low-end system, that 4 could really be 423 VMs. I’m making a theoretical statement here to prove the point. Let’s use a 2-socket, 24-core S824 server: 3 VMs, each with 1 core (yes, I said core) for production usage, and the 4th VM is really a Shared Processor Pool with 21 cores. Those 21 cores support up to 20 VMs per core, or 420 VMs. Any non-production use is permitted.
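Spelling out that arithmetic in a tiny sketch, using the hypothetical S824 configuration and the 20-VMs-per-core micro-partitioning figure cited above:

```python
# Theoretical VM count for the hypothetical 24-core S824 example above:
# 3 dedicated 1-core production VMs plus a 21-core Shared Processor Pool.
total_cores = 24
dedicated_production_vms = 3                            # each gets 1 dedicated core
pool_cores = total_cores - dedicated_production_vms     # 21 cores left for the shared pool
vms_per_pool_core = 20                                  # micro-partitioning limit cited in the post

total_vms = dedicated_production_vms + pool_cores * vms_per_pool_core
print(total_vms)                                        # 3 + 21 * 20 = 423
```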

Each PowerVM VM supports up to 16 TB of memory and 144 cores. A VM larger than 108 cores requires the use of SMT4, whereas 108 cores or fewer permit SMT8. Thus, 144 cores with SMT4 is 576 vCPUs, or 4.5X what Intel can do, with 4X the memory footprint. By the way, that 108-core VM would support 864 vCPUs – just saying! Note: I need to verify, as the largest SMT8 VM may be 96 cores with only 768 vCPU.
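Here is the vCPU math behind those figures as a quick sketch (illustrative only; as noted above, the exact SMT8 ceiling may be 96 cores rather than 108):

```python
# vCPU counts for the PowerVM limits quoted above (illustrative only; the
# post itself notes the SMT8 ceiling may be 96 cores rather than 108).
def vcpus(cores: int, smt: int) -> int:
    """Logical processors presented to the OS: cores x SMT threads per core."""
    return cores * smt

print(vcpus(144, 4))   # 576 vCPUs for a 144-core VM at SMT4
print(vcpus(108, 8))   # 864 vCPUs for a 108-core VM at SMT8
print(vcpus(96, 8))    # 768 vCPUs if the SMT8 limit is in fact 96 cores
```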

Not only can we allocate physical cores to VMs without being limited to 1/2- or full-socket increments like Intel, but POWER systems’ granularity also allows for adjustments at the vCPU level.

PowerVM supports scale-out and scale-up.  Then again, if you have heard or read about the Pfizer story for scale-out BW, you might rethink a literal scale-out approach. Read IBM’s Alfred Freudenberger’s blog on this subject at

While on the subject of BWoH/B4H, PowerVM supports 6 TB per VM whereas vSphere 6.0 supports 3 TB, and the limitations only grow from there.

Do you see why I choose to promote IBM Power vs Intel? When I walk into a client, the most valuable item I bring with me is my credibility. HANA on Intel is a constant train wreck with constant changes & gotchas. Clients currently running HANA on Intel solutions, or better yet, running ECC on Intel, have options. That option is to move to a HANA 2.0 environment using SUSE 12 or Red Hat 7 Linux on POWER servers. Each server will host multiple VMs with greater resiliency, providing the business the flexibility it desires from the critical business system that likely touches every part of the business.

Does your IT shop use a combination wrench?

More and more, IT shops seem inclined to consolidate and simplify their infrastructure to one platform: a mindset that all workloads can or should run on a single platform incorporated into ‘Software-defined this’ and ‘Software-defined that’. It tantalizes decision makers’ senses as vendors claim to reduce complexity and cost.

Technology has become Ford vs Chevy or John Deere vs Case International. Whereas these four vendors each have some unique capabilities and offerings, they are also all leaders in innovation and reliability. For IT shops, there is a perception that only Intel & VMware are viable infrastructure options to deploy every workload type: mission/life-critical workloads in healthcare, high-frequency financial transactions, HPC, Big Data, analytics, emerging cognitive & AI, but also the traditional ERP workloads that run entire businesses. SAP ECC, SAP HANA and Oracle EBS are probably the most common that I see, along with some industry-specific ones for industrial and automotive companies – I’m thinking of Infor.

When a new project comes up, there is little thought given to the platform. Either the business or maybe the ISV will state what and how many of server X should be ordered. The parts arrive, eventually getting deployed. Little consideration is given to the total cost of ownership or the impact to the business caused by the system complexity.

I watched a client move their Oracle workloads to IBM POWER several years ago. This allowed them to reduce their software licensing and annual maintenance cost as well as redeploy licensing to other projects – cost avoidance by not having to add net-new licensing. As happens in business, people moved on, out and up. New people came in whose answer to everything was Intel + VMware. Yes, a combination wrench.

If any of you have used a combination wrench, you know there are a few times it is the proper tool. However, it can also strip or round over the head of a bolt or nut if too much pressure or torque is applied. Sometimes the proper tool is an SAE or metric box wrench, possibly a socket, even an impact wrench. In this client’s case, they have started to move their Oracle workloads from POWER to Intel – workloads currently running on standalone servers or, at most, 2-node PowerHA clusters – moving these simple (little complexity) Oracle VMs to 6-node VMware Oracle RAC clusters that have now grown to 8 nodes. Because we all know that Oracle RAC scales really well (please tell me you picked up on the sarcasm).

I heard from the business earlier this year that they had to buy over $5M of net-new Oracle licensing for this new environment. Because of this unforeseen expense, they are moving other commercial products to open source to offset the Oracle cost, since we all know that open source is “free”.

Oh, I forgot to mention: that 8-node VMware Oracle RAC cluster is crashing virtually every day. I guess they are putting too much pressure on the combination wrench!

Oracle is a mess & customers pay the price!

Chaos that is Oracle

Clients are rapidly adopting open source technologies in support of purpose-built applications while also shifting portions of on-premises workloads to major cloud providers like Amazon’s AWS, Microsoft’s Azure and IBM’s SoftLayer. These changes are sending Oracle’s licensing revenue into the tank, forcing them to re-tool … and I’m being kind saying it this way.

What do we see Oracle doing these days?

  • Aggressively going after VMware environments that use Oracle Enterprise products for licensing infractions
  • Pushing each of their clients toward Oracle’s public cloud
  • Drastically changing how Oracle is licensed for Authorized Cloud Environments using Intel servers
  • Latest evidence indicates they are set to abandon Solaris and SPARC technology
  • On-going staff layoffs as they shift resources, priorities & funding from on-premises to cloud initiatives

VMware environments

I’ve previously discussed how, when running Oracle on Intel (vs IBM POWER), Intel & VMware have an Oracle problem. This was acknowledged by Chad Sakac, President of Dell EMC’s Converged Platforms Division, in his August 17, 2016 blog, in what really amounted to an open letter to King Larry Ellison himself. I doubt most businesses using Oracle with VMware & Intel servers fully understand the financial implications this has for their business. Allow me to paraphrase the essence of the note: “Larry, take your boot off the necks of our people”.

This is a very contentious topic so I’ll not take a position, but I will try to briefly explain both sides. Oracle’s position is simple to state even though its implications are very complex: Oracle does not recognize VMware as an approved partitioning method (it views it as soft partitioning) to limit Oracle licensing. As such, clients running Oracle in a VMware environment, regardless of how little or how much is used, must license it for every Intel server under that client’s enterprise (assume vSphere 6+). They really do go beyond a rational argument, IMHO. Since Oracle owns the software and authored the rules, they use these subtleties to lean on clients, extracting massive profits despite what the contract may say. An example that comes to mind is how Oracle suddenly changed licensing configurations for Oracle Standard Edition and Standard Edition One. They sunset both of these products as of December 31, 2015, replacing both with Standard Edition 2. In what can only be described as screwing clients, they halved the number of sockets allowed on a server or in a RAC cluster and limited the number of CPU threads per DB instance while doubling the minimum number of Named User Plus (NUP) licenses. On behalf of Larry, he apologizes to any 4-socket Oracle Standard Edition users, but if you don’t convert to a 2-socket configuration (2 sockets for 1 server, or 1 socket each for 2 servers using RAC) then be prepared to license the server using the Oracle Enterprise Edition licensing model.

The Intel server vendors and VMware have a different interpretation of how Oracle should be licensed. I’ll boil their position down to using host or CPU affinity rules. House of Bricks published a paper that does a good job trying to defend Intel+VMware’s licensing position. In the effort, they also show how fragile the ground they sit on is, highlighting the risks businesses take if they hitch their wagons to HoB, VMware and, at least, Dell’s recommendations.

This picture, which I believe House of Bricks gets the credit for creating, captures the Oracle licensing model for Intel+VMware environments quite well. When you pull your car into a parking garage, you expect to pay for 1 spot, yet Oracle says you must pay for every one since you could technically park in any of them. VMware asserts you should only pay for a single floor at most, because your vehicle may not be a compact car, may not have the clearance for all levels, and there are reserved & handicapped spots which you can’t use. You get the idea.


It is simply a disaster for any business to run Oracle on Intel servers. Oracle wins if you do not virtualize, running each workload on standalone servers. Oracle wins if you use VMware, regardless of how little or how much you actually use. Be prepared to pay or to litigate!

Oracle and the “Cloud”

It is more difficult to provide sources for this topic, so I’ll just stick to anecdotal evidence. Take it or leave it. At contract renewal, adding products to contracts or new projects like migrating JD Edwards “World” to “Enterprise One” or a new Oracle EBS deployment would subject a business to an offer like this: “Listen Bob, you can buy 1000 licenses of XYZ for $10M, or you can buy 750 licenses of XYZ for $6M, buy 400 Cloud units for $3M and we will generously throw in 250 licenses …. you’ll still have to pay support of course. You won’t get a better deal Bob, act now!” Yes, Oracle is willing to take a hit on the on-premises license revenue while bolstering their cloud sales by simply shuffling the Titanic’s deck chairs. These clients, for the most part, are not interested in the Oracle cloud and will never use it other than to get a better deal during negotiations. Oracle then reports to Wall Street that they are having tremendous cloud growth. Just google “oracle cloud fake bookings” to read plenty of evidence supporting this.

Licensing in the Cloud

Leave it to Oracle Marketing to find a way to get even deeper into clients’ wallets – congratulations, they’ve found a new way in the “Cloud”. Oracle charges at least 2X more for Oracle licenses on Intel servers that run in Authorized Cloud Environments (ACEs). You do not license Oracle in the cloud using the on-premises licensing factor table, so the more VMs running in an ACE, the more you will pay vs an on-premises deployment. Properly licensing an on-premises Intel server (remember, the underlying proof is always that Oracle on POWER servers is the better solution), regardless of whether virtualization is used and assuming a 40-core server, would equal 20 Oracle Licenses (the licensing factor for Intel servers is 0.5 per core). Assume 1 VMware server, ignoring that it is probably part of a larger vSphere cluster. Once licensed, clients using VMware could theoretically run as many Oracle VMs as desired or supported by that server. Over-provision the hell out of it – it doesn’t matter. For that same workload in an ACE, you pay for what amounts to every core. Remember, if the core resides on-premises it is 1 Oracle License for every 2 Intel cores, but in an ACE it is 1 OL for 1 core.

Putting your Oracle workload in the cloud? Oracle’s license rules stipulate that in AWS, where both hyperthreads of a physical core are presented as vCPUs, 2 vCPUs = 1 Oracle License (OL). Using the same 40-core Intel server mentioned above, with hyperthreading it would be 80 threads, or 80 vCPUs. Using Oracle’s new cloud licensing guidelines, that would be 40 OL. If this same server were on-premises, those 40 physical cores (regardless of threads) would be 20 OL ….. do you see it? The licensing is double!!! If your AWS vCPU consumption is less than the on-premises consumption you may be OK. As soon as your consumption goes above that point – well, break out your checkbook. Let your imagination run wild thinking of the scenarios where you will pay for more licenses in the cloud vs on-prem.

Since Azure does not use hyperthreading, 1 vCPU = 1 core. The licensing method for Azure, or any other ACE where hyperthreading is not used, is 1 vCPU = 1 OL. So if a workload requires 4 vCPUs, it requires 4 OL vs the 2 OL it would need on-premises.
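Here is a small sketch of that arithmetic, using the rules as I have quoted them above (verify against Oracle’s current cloud licensing policy document before relying on it):

```python
# Oracle license counts, on-premises vs Authorized Cloud Environments (ACE),
# using the rules quoted in this post. Verify against Oracle's current policy
# document before using these numbers for anything real.

def on_prem_licenses(physical_cores: int, core_factor: float = 0.5) -> float:
    """On-premises x86: every installed core times the 0.5 core factor."""
    return physical_cores * core_factor

def aws_licenses(vcpus: int, hyperthreading: bool = True) -> float:
    """AWS: 2 vCPUs (hyperthreads) = 1 OL; without hyperthreading, 1 vCPU = 1 OL."""
    return vcpus / 2 if hyperthreading else float(vcpus)

def azure_licenses(vcpus: int) -> float:
    """Azure (no hyperthreading, per the post): 1 vCPU = 1 OL."""
    return float(vcpus)

print(on_prem_licenses(40))      # 40 cores on-prem -> 20.0 licenses
print(aws_licenses(80))          # same box as 80 AWS vCPUs -> 40.0 licenses
print(azure_licenses(4))         # a 4 vCPU Azure VM -> 4.0 licenses (vs 2.0 on-prem)
```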

There are three excellent references to review: Oracle’s cloud licensing document, an article by SiliconANGLE giving their take on this change, and a blog by Tim Hall, a DBA and Oracle ACE Director, sharing his concerns. Just search for this topic starting from January 2017 and read until you fall asleep.

Oracle offers their own cloud and, as you might imagine, they do everything they can to favor it through licensing, contract negotiations and other means. From SaaS to IaaS and PaaS, their marketing machine says they are second to none, whether the competition is Salesforce, Workday, AWS, Azure or any other. Of course, neither analysts, the media, the internet nor Oracle’s earnings reports show they are having any meaningful success – at least not to the degree they claim.

Most recently, Oracle gained attention for updating how clients can license Oracle products in ACEs, as mentioned above. As you might imagine, Oracle licenses its products slightly differently in its own cloud than in competitors’ clouds, but they still penalize Intel and even SPARC clients, whom they’ll try to migrate into the cloud running Intel (since it appears Oracle is abandoning SPARC). The Oracle Cloud offers clients access to its products on an hourly or monthly basis, in metered and non-metered formats, on up to 4 different levels of software. Focusing on Oracle DB, the general tiers are the Standard, Enterprise, High-Performance and Extreme-Performance packages. Think of it like Oracle Standard Edition, Enterprise Edition, EE+tools and EE+RAC+tools. Oracle also defines the hardware tier as “Compute Shapes”. The three tiers are General Purpose, High-Memory and Dedicated Compute.

Comparing the cost of an on-premises perpetual license for Oracle Enterprise Edition vs a non-metered monthly license for the Enterprise tier is fair because both use the Oracle Enterprise Edition Database. Remember, a perpetual license is a one-time purchase: $47,500 list price for EE DB plus 22% annual maintenance. The Enterprise tier using a High-Memory compute shape in the Oracle cloud is $2,325 per month. This compute shape consists of 1 OCPU (Oracle CPU), or 2 vCPUs (2 threads / 1 core). Yes, just like AWS and Azure, Intel licensing in the cloud is at best 1.0 per core vs 0.5 per core for on-premises licensing. Depending on how a server might be over-provisioned, and given that an on-premises server is fully licensed at 1/2 of its installed cores, there are a couple of ways clients will vastly overpay for Oracle products in any cloud.

The break-even point for a perpetual license + support vs a non-metered Enterprise tier using the High-Memory compute shape comes at roughly 34 months (the sketch after the list below reproduces the arithmetic).

  • Perpetual license
    • 1 x Oracle EE DB license = $47,500
    • 22% annual maintenance = $10,450
    • 3 year cost: $78,850
  • Oracle Cloud – non-metered Enterprise using High-Memory shape
    • 1 x OCPU for Enterprise Package for High-Compute = $2325/mo
    • 1 year cloud cost = $27,900
    • 36 month cost: $83,700
  • Cross-over point is at roughly 34 months
    • $79,050 is the 34-month cost in the Cloud, just above the $78,850 three-year cost of the perpetual license
  • An Oracle Cloud license becomes significantly more expensive after this.
    • Year 4 for a perpetual license would be $10,450
    • 12 months in year 4 for the Cloud license would be $27,900
    • Annual cost increase for a single cloud license over the perpetual license = $17,450
  • Please make your checks payable to “Larry Ellison”
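If you want to check the math, here is a minimal sketch (assuming the list prices quoted above and maintenance paid annually in advance) that finds the month where the cloud subscription overtakes the perpetual license:

```python
# Break-even sketch for the list prices quoted above. Assumptions: maintenance
# is paid annually in advance, prices are list, single license / single OCPU.
PERPETUAL_LICENSE = 47_500                       # Oracle EE DB, one-time
ANNUAL_MAINTENANCE = 0.22 * PERPETUAL_LICENSE    # $10,450 per year
CLOUD_MONTHLY = 2_325                            # non-metered Enterprise, High-Memory shape, 1 OCPU

def perpetual_cost(months: int) -> float:
    """One-time license plus maintenance for each year started."""
    years_started = -(-months // 12)             # ceiling division
    return PERPETUAL_LICENSE + years_started * ANNUAL_MAINTENANCE

def cloud_cost(months: int) -> float:
    return months * CLOUD_MONTHLY

for m in range(1, 61):
    if cloud_cost(m) > perpetual_cost(m):
        print(f"Cloud overtakes perpetual at month {m}: "
              f"${cloud_cost(m):,.0f} vs ${perpetual_cost(m):,.0f}")
        break
```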

Oracle’s revenues continue to decline as clients move to purpose-built NoSQL solutions such as MongoDB, Redis Labs, Neo4j, OrientDB and Couchbase, as well as SQL-based solutions from MariaDB and PostgreSQL (I like EnterpriseDB); even DB2 is a far better value. Oracle’s idea isn’t to re-tool by innovating and listening to clients to move with the market. No, they get out their big stick, following the classic mistake so many great companies have made before them, which is to not evolve while pushing clients until something breaks. Yes, Boot Hill is full of dead technology companies who failed to innovate and adapt. This is why Oracle is in complete chaos. Clients beware – you are on their radar!



HPE, there you go again! Part 1

Updated Sept 05, 2016: Split the blog into 2 parts (Part 2). Fixed several typos and sentence structure problems. Updated the description of the Superdome X blades to indicate they are 2 socket blades using Intel E7 chips.

It must be the season, as I find myself focused a bit on HPE. Maybe it’s because they seem to be looking for their identity as they now consider selling their software business. This time though, it is self-inflicted, as there has been a series of conflicting marketing actions. On one hand, their recent HPE RAS whitepaper states in its introductory section that memory is far and away the highest source of component failures in a system. Shortly after that RAS paper was released, they posted a blog written by the HPE Server Memory Product Manager stating “Memory Errors aren’t the end of the World”. Tell that to SAP HANA and Oracle Database customers, the latter of which I will be discussing in this blog.

HPE dares to step into the lion’s den on a topic on which it has little standing, implying it is an authority on how Oracle Enterprise software products are licensed on IBM Power servers. As a matter of fact, thanks go to the President of VCE, Chad Sakac, for acknowledging that VMware has an Oracle problem. On August 17th, Chad penned what amounts to an open letter to Larry & Oracle begging them …. no, demanding that Larry leave his people alone. And by “his people”, I mean customers who run Oracle Enterprise software products licensed by the core on Intel servers using VMware.

Enter HPE with a recent blog by Jeff Kyle, Director of Mission Critical Solutions. He doesn’t distinguish whether he is in a product development, marketing or sales role. I would bet it is one of the latter two, as I do not think a product developer would put themselves out there like Jeff just did. What he did is what all Intel marketing teams and sellers have done from the beginning of compute time, when the first customer thought of running Oracle on a server that wasn’t “Big Iron”.

Jeff sets up a straw man stating “software licensing and support being one of the top cost items in any data center”, followed by the obligatory claim that moving it to “advanced” yet “industry-standard x86 servers” will deliver the ROI to achieve the goals of every customer while coming damn close to solving world hunger.

Next is where he enters the world of FUD while also stepping into the land of make-believe. Yes, Jeff is talking about IBM Power technology as if it is treated by Oracle for licensing purposes the same as an Intel server, which it is not. You will have to judge whether he did this on purpose or simply out of ignorance. He does throw the UNIX platforms a bone by saying they have “excellent stability and performance”, but stops there, only to claim they cost more than their industry-standard x86 server counterparts.

He goes on to state UNIX servers <Hold Please> Attention: for purposes of this discussion, let’s go with the definition that future UNIX references = AIX and RISC references = IBM POWER unless otherwise stated. As I was saying, Jeff next claims AIX & POWER are not well positioned for forward-looking cloud deployments, continuing his diminutive descriptors by suggesting proper clients wouldn’t want to work with “proprietary RISC chips like IBM Power”. But the granddaddy of all of his statements, and the one that is completely disingenuous, is: <low monotone voice> “The Oracle license charge per CPU core for IBM Power is twice (2X) the amount charged for Intel x86 servers” </low monotone voice>.

In his next paragraph, he uses some sleight of hand by altering the presentation of the traditional full list-price cost for Oracle RAC that is associated with Oracle Enterprise Edition Database. Oracle EE DB is $47,500 per license + 22% maintenance per year, starting with year 1. Oracle RAC for Oracle EE DB is $23,000 per license + 22% maintenance per year, starting with year 1. If you have Oracle RAC then you would, by definition, also have a corresponding Oracle EE DB license. The author uses a price of $11,500 per x86 CPU core, and although in doing so he isn’t wrong per se, I just do not like that he does not disclose the full license cost of $23,000 up front, as it looks like he is trying to minimize the cost of Oracle on x86.

A quick licensing review: Oracle has a license factor table for different platforms to determine how to license its products that are licensed by core. Most modern Intel servers carry a factor of 0.5 per core; IBM Power is 1.0 per core. HP Itanium 95xx chip based servers, just so you know, also have a license factor of 1.0. Oracle, since they own the table and the software in question, can manipulate it to favor their own platforms, as they do, especially with the SPARC servers: those range from 0.25 to 0.75, while Oracle’s Intel servers are consistent with the other Intel servers at 0.5. Let’s exclude the Oracle Intel servers for purposes of what I am talking about here, for the reason I said, which is that they manipulate the situation to favor themselves. All other Intel servers MUST license ALL cores in the server, with very, very limited exceptions, times the licensing factor of 0.5. Thus, a server with 2 x 18-core sockets has 36 cores: 2s x 18c = 36c x 0.5 license factor = 18 licenses. That equals 18 Oracle Licenses for whatever product is being used.
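A tiny sketch of that core-factor arithmetic, using the factor values as quoted above (Oracle’s current core factor table is the authoritative source):

```python
# Core-factor arithmetic as described above. The factor values are the ones
# quoted in this post; always check Oracle's current core factor table.
CORE_FACTOR = {
    "intel_x86": 0.5,
    "ibm_power": 1.0,
    "hp_itanium_95xx": 1.0,
}

def oracle_licenses(sockets: int, cores_per_socket: int, platform: str) -> float:
    """Licenses required when ALL installed cores must be licensed."""
    return sockets * cores_per_socket * CORE_FACTOR[platform]

# The example above: 2 sockets x 18 cores on x86 -> 36 cores x 0.5 = 18 licenses.
print(oracle_licenses(2, 18, "intel_x86"))   # 18.0
# Note: on IBM Power the factor is 1.0, but Oracle recognizes PowerVM capped
# partitions as hard partitioning, so typically only the cores assigned to the
# LPAR need licensing rather than every core in the server.
```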

What Jeff did next was a bit surprising to me. He suggests customers not bother with 1 & 2 socket Intel “scale-out” servers, which generally rely on Intel E5, aka EP, chipsets. By the way, Oracle’s Exadata & Oracle Database Appliances now ONLY use 2-socket servers with E5 processors; let that sink in as to why. The EP chips tend to have features that on paper deliver less performance, such as lower memory bandwidth and fewer cores, while other features such as clock frequency are higher, which is good for Oracle DB. These chips also have lower RAS capabilities, such as missing the MCA (Machine Check Architecture) feature found only in the E7 chips. He instead suggests clients look at “scale-up” servers, commonly classified as 4-socket and larger systems. This is where I need to clarify a few things. The HP Superdome X system, although it scales to 16 sockets, does so using 2-socket blades. Each socket uses the Intel E7 processor, which, given this is a 2-socket blade, is counter to what I described at the beginning of this paragraph where 1 & 2 socket servers use E5 processors. The design of the HP SD-X is meant to scale from 1 blade to 8 blades, or 2 to 16 sockets, which requires the E7 processor.

With the latest Intel Broadwell EX, or E7, chipsets, the number of cores available for the HP SD-X ranges from 4 to 24 per socket. Configuring a blade with the 24-core E7_v4 (v4 indicates Broadwell) equals 48 cores, or 24 Oracle Licenses; reference the discussion two paragraphs above. His assertion is that by moving to a larger server you get larger memory capacity for those “in-memory compute models”, and it is this combination that will dramatically improve your database performance while lowering your overall Total Cost of Ownership (TCO).

He uses a customer success story for Pella (windows) who avoided $200,000 in Oracle licensing fees after moving off a UNIX (not AIX in this case) platform to 2 x HPE Superdome X servers running Linux. This HPE customer case study says the UNIX platform which Pella moved off 9 years ago was actually an HP Superdome server with Intel Itanium processors running HP-UX. Did you get this? HP migrated off their own 9-year-old server while implying it might be from a competitor – maybe even AIX on Power, since it was referenced earlier in the story. That circa-2006 Itanium may have used a Montecito-class processor. All of the early models before Tukwila were pigs, in my estimation: a lot of bluff and hyperbole, but rarely delivering on the claims. That era of SD would have also used an Oracle license factor of 0.5, as Oracle didn’t change it until 2010 and only on the newer 95xx series chips. Older systems were grandfathered and, as I recall, as long as they didn’t add new licenses they would remain under the 0.5 license model. I would expect a 2014/2015-era Intel processor to outperform a 2006-era chip, although if it had been up against a POWER5 1.9 or 2.2 GHz chip I might call it 50-50. :)

We have to spend some time discussing HP server technology, as Jeff is doing some major league sleight of hand here. The Superdome X server supports a special hardware partitioning capability (more details below) that DOES allow for reduced licensing, something that IS NOT available on non-Superdome x86 servers or from most other Intel vendors unless they also have an 8-socket or larger system, like SGI – oh wait, HP just bought them. Huh, wonder why they did that if the HPE Superdome X is so good.

Jeff then mentions an IDC research study; big deal, here is a note from my Pastor that says the HPE Superdome is not very good; who are you going to believe?

Moving the rest of the blog to Part 2.