Have it your way with POWER9

IBM POWER offers system footprint and capabilities to meet any client requirement.

Henry Ford is credited with saying, “you can have any color you want, as long as it is black.” Consumers, whether retail or enterprise, like options and want to buy products the way they want them.

IBM recently announced AIX, IBM i and Linux capable POWER9 Scale-out servers, shown below; you can learn more about each here.

[Image: POWER9 Scale-out server portfolio]

These six systems join the AC922, an AI and cognitive beast that uses NVLink 2.0 and supports up to six water-cooled NVIDIA Volta GPUs.

With the six POWER9-based systems announced February 13, 2018, IBM is offering clients choice – virtually “any color you want.” Choose a 2 RU (rack unit) or 4 RU model with 1 or 2 sockets, cores ranging from 4 to 24, and from 16 GB to 4 TB of system memory. Internal storage options span HDD, SSD and NVMe, plus all of the connectivity expected from PCIe adapters – now with newer adapters offering more ports at higher speeds.

Run AIX, IBM i and Linux on a 1- or 2-socket S922 or H922, a 1-socket S914, or a 1- or 2-socket S924 or H924. Need Linux only? Choose any of the previously mentioned servers, or the cost-optimized L922 with 1 or 2 sockets supporting 32 GB up to 4 TB of RAM.
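
For quick reference, here is a minimal sketch of that scale-out lineup as a plain data structure. The values come from the ranges described above (and the memory caps noted later in this post); treat any per-model detail as approximate rather than authoritative.

```python
# Sketch of the POWER9 Scale-out lineup as described in this post.
# Confirm exact per-model limits (cores, memory caps) against IBM's spec sheets.
POWER9_SCALE_OUT = {
    "S914": {"rack_units": 4, "sockets": 1, "os": ["AIX", "IBM i", "Linux"], "max_memory_tb": 1},
    "S922": {"rack_units": 2, "sockets": 2, "os": ["AIX", "IBM i", "Linux"], "max_memory_tb": 4},
    "H922": {"rack_units": 2, "sockets": 2, "os": ["AIX", "IBM i", "Linux"], "max_memory_tb": 4},
    "S924": {"rack_units": 4, "sockets": 2, "os": ["AIX", "IBM i", "Linux"], "max_memory_tb": 4},
    "H924": {"rack_units": 4, "sockets": 2, "os": ["AIX", "IBM i", "Linux"], "max_memory_tb": 4},
    "L922": {"rack_units": 2, "sockets": 2, "os": ["Linux"], "max_memory_tb": 4},
}

# Example: list the models that can run IBM i.
ibm_i_capable = [m for m, spec in POWER9_SCALE_OUT.items() if "IBM i" in spec["os"]]
print(ibm_i_capable)
```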

As part of the broader announcement, IBM issued a Statement of Direction indicating its intention to offer AIX on the POWER-based Nutanix solution. It is reasonable to conclude there will be a POWER9-based Nutanix option as well. Expecting a POWER9 solution isn’t surprising, but being able to run AIX on a non-PowerVM hypervisor is a big deal.

Looking at the entire POWER portfolio available to clients today, it ranges from the POWER8-based hyper-converged Nutanix systems to mid-range and Enterprise-class POWER8 systems, which complement the POWER9 Scale-out and specialty systems.


[Image: IBM POWER portfolio, February 2018]

Whether the solution is Nutanix running AIX and Linux, an Enterprise server with 192 cores, or a 1-socket L922 running PostgreSQL or MongoDB in a lab, businesses can “have it your way.”


Upgrade to POWER9 – Never been easier!

POWER9 delivers more features and performance at a lower cost, and the ease and options available for upgrading have never been more compelling.

With an outstanding family of products in IBM’s POWER8 portfolio, it seemed impossible for IBM to deliver a successor with more features, increased performance and greater value at a lower price point. On February 13th, IBM announced the POWER9 Scale-out products supporting AIX, IBM i and Linux; the first POWER9 announcement came on December 5, 2017 with the AC922, an HPC and AI beast.

These newly announced PowerVM-based systems consist of 1- and 2-socket systems supporting up to 4 TB of DDR4 memory, starting with the robust 1-socket S914, then accelerating to the 2RU 2-socket S922 and the 4RU 2-socket S924. IBM also announced sister systems to the S-models, purpose-built for SAP HANA: the H922 and H924, which are essentially identical to the S922 and S924. The H-models might also be considered hybrid systems, as they come bundled with key software used with HANA while allowing a smaller AIX and IBM i footprint – sort of a hybrid between an S- and an L-model system. There is also a Linux-only model, just as there was with POWER8: the L922, a 2-socket system also available in a 1-socket configuration. Each of these systems supports up to 4 TB of memory except the S914, which supports up to 1 TB.

Why should businesses consider upgrading to POWER9? Clients running on POWER7 and older systems will save significant cost by lowering hardware and software maintenance. Moreover, with the increased performance, clients will be able to consolidate more VMs than ever and reduce enterprise software licensing along with its exorbitant maintenance cost.

While Intel cancels Knights Landing and struggles to deliver innovation and performance on its 10nm and 7nm platforms, remaining in a perpetual state of treading water at 14nm, what it is delivering seems to benefit ISVs more than businesses.

Traditional workloads such as Oracle, DB2, WebSphere, SAP (ECC and HANA), Oracle EBS, PeopleSoft, JD Edwards, Infor, Epic and more all benefit. For businesses looking to develop and deploy 21st-century technologies, these purpose-built systems deliver new innovations ideally suited for cognitive (analytics) and web workloads – from NoSQL products such as Redis Labs, Cassandra, Neo4j or Scylla to open-source relational databases like PostgreSQL or MariaDB.

With the increased performance and higher efficiencies, all software boats will rise running on POWER9.

My team of Architects and Engineers at Ciber Global is prepared to help migrate workloads from your POWER5, POWER6, POWER7 and even POWER8 systems running AIX 5.3, 6.1, 7.1 and 7.2, as well as IBM i 6.1, 7.1, 7.2 and 7.3, to POWER9.

POWER9 supports AIX 6.1, 7.1 and 7.2; for IBM i, it supports 7.2 and 7.3. For client systems not at these levels, our consultants are available to guide them through the requirements and their upgrade options. Whether using Live Partition Mobility (aka the Easy Button) to move workloads from POWER6, POWER7 or POWER8 systems to POWER9, or using more traditional methods such as AIX NIM or IBM i full-system save/restore, there is likely an approach that meets the business’s needs.
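
As a planning aid, here is a minimal sketch that checks a current OS level against the POWER9 support statements quoted above. The supported-level lists come straight from this post; real support also depends on specific technology levels and service packs, so verify against IBM’s official documentation.

```python
# Hypothetical pre-migration check based only on the support levels cited above.
POWER9_SUPPORTED = {
    "AIX": {"6.1", "7.1", "7.2"},
    "IBM i": {"7.2", "7.3"},
}

def power9_ready(os_name: str, level: str) -> str:
    """Return a rough readiness note for a given OS and version level."""
    supported = POWER9_SUPPORTED.get(os_name, set())
    if level in supported:
        return f"{os_name} {level}: supported on POWER9; LPM or NIM/save-restore are candidate paths."
    return f"{os_name} {level}: upgrade the OS to one of {sorted(supported)} before (or during) the move."

print(power9_ready("AIX", "5.3"))
print(power9_ready("IBM i", "7.3"))
```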

Rest assured, if you have doubts or concerns, reach out to my team at Ciber to discuss them. And if you don’t already have the Easy Button, IBM is offering a 60-day trial key for clients to upgrade their PowerVM Standard Edition licenses to Enterprise Edition on P6, P7 or P8 systems, making the move to POWER9 not only financially easy but also technically easy.


HPE Memory RAS; Excels at being Average

A recent HPE blog stating that memory errors are not the end of the world was meant to reassure clients into accepting regular, unplanned platform disruptions. In reality, what HPE ends up saying is that there is little difference between its servers and those of the other commercial Intel server vendors; they all range from below average to average at best. It just so happens this particular blog was written by the HPE Server Memory Product Manager, who might be forgiven for painting a dire picture only to then present the best alternative: yes, HPE SmartMemory. *shock*

To HPE’s credit, they have quite a bit of documentation discussing server Reliability, Availability and Serviceability (RAS) features, specifically for their memory subsystem. They are fairly forthright about the strengths and weaknesses of their entry, mid-range and high-end servers. Sadly though, at every level their message is full of qualifiers, limitations and restrictions that the consumer must wade through and understand.

An HPE whitepaper from February 2016 titled “How memory RAS technologies can enhance the uptime of HPE ProLiant servers” paints a starkly different picture than the blog. On page 2, in the second paragraph of the introductory summary, the whitepaper states, “It might surprise you to know that memory device failures are far and away the most frequent type of failure for scale-up servers,” at up to 2X the rate of the next closest component when the memory is configured with a protection scheme no better than SDDC+1. Another graph that immediately follows shows that configuring memory with a DDDC+1 protection scheme decreases memory failures by 85%. That is pretty good, yet the 85% figure in the whitepaper does not jibe with the blog, which states that using HPE SmartMemory reduces memory errors by 99.9998% (yes, that is five 9s). I call out this discrepancy because, right after claiming five 9s, the blog points the reader to the very whitepaper I am citing here.
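
To make the discrepancy concrete, here is a small sketch comparing the residual failure rates implied by the two figures (an 85% reduction per the whitepaper vs. a 99.9998% reduction per the blog). The arithmetic assumes nothing beyond the percentages quoted above; it only illustrates why the two claims cannot describe the same thing.

```python
# Residual memory-failure rate implied by each claim, relative to a baseline of 1.0.
whitepaper_reduction = 0.85      # DDDC+1 cuts memory failures by 85% (whitepaper)
blog_reduction = 0.999998        # "99.9998%" reduction claimed in the HPE blog

residual_whitepaper = 1 - whitepaper_reduction   # 0.15 of baseline failures remain
residual_blog = 1 - blog_reduction               # 0.000002 of baseline failures remain

print(f"Whitepaper residual: {residual_whitepaper:.2%} of baseline failures")
print(f"Blog residual:       {residual_blog:.4%} of baseline failures")
print(f"Ratio between the two claims: {residual_whitepaper / residual_blog:,.0f}x")
```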

This blog is not meant to define all of the different terms used; you will have to do some of that work. However, it is worth noting that for all of the wonderful features touted in the HPE blog, the HPE whitepaper and many other sources, the consumer will find many qualifiers, limitations and restrictions, such as:

  1. E5 chips do not support DDDC or DDDC+1
  2. E5 chips only support SDDC or SDDC + rank sparing
  3. Memory sparing consumes (wastes) either 25% or 12.5% of installed capacity
  4. EX chips support SDDC, SDDC + rank sparing, SDDC+1 and DDDC+1
  5. But DDDC+1 is ONLY supported with x4 DIMMs, not x8 DIMMs
  6. Advanced ECC is an option used across 2 DIMMs but can only fill 2 of 3 DIMM slots per channel
  7. Memory Mirroring is the most expensive option in terms of cost and performance
  8. Memory Mirroring wastes 1/2 of the DIMM slots for the mirror – not usable
  9. Memory Mirroring only allows you to fill 2 of 3 DIMM slots per channel
  10. Memory Mirroring has a potential performance impact for WRITES

Let’s be clear: consumers have three primary options for configuring memory on any of the Intel servers (a rough capacity sketch follows the list below).

  1. Performance mode, which delivers the highest bandwidth with the lowest reliability features. Not an ideal option for in-memory workloads, despite the appeal of maximizing bandwidth.
  2. Lockstep mode, meant to strike a balance: slightly decreased bandwidth (it can be up to 50%¹) in exchange for more reliability than performance mode. Probably the most commonly selected option.
  3. Memory Mirroring mode, which delivers the highest reliability at the expense of wasting 1/2 the memory capacity, as well as a slight performance decrease (remember, this mode can only use 2 of the 3 DIMM slots per channel, so you already lose 1/3 of the potential memory capacity).
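
Here is the rough capacity sketch promised above. It only encodes the overheads stated in this post (mirroring halves usable capacity and can fill only 2 of 3 DIMM slots per channel; rank sparing gives up 12.5% or 25%); actual numbers vary by server model and DIMM population, so treat it as illustrative.

```python
# Illustrative usable-capacity math for the Intel memory protection modes above.
def usable_memory_gb(max_capacity_gb: float, mode: str, sparing_overhead: float = 0.125) -> float:
    """Rough usable capacity, relative to the server's maximum installable capacity."""
    if mode == "performance":          # highest bandwidth, lowest protection
        return max_capacity_gb
    if mode == "rank_sparing":         # 12.5% or 25% of capacity set aside as spare ranks
        return max_capacity_gb * (1 - sparing_overhead)
    if mode == "mirroring":            # half of the (already reduced) population holds the mirror
        populated = max_capacity_gb * (2 / 3)   # only 2 of 3 DIMM slots per channel may be filled
        return populated / 2
    raise ValueError(f"unknown mode: {mode}")

for mode in ("performance", "rank_sparing", "mirroring"):
    print(f"{mode:>13}: {usable_memory_gb(768, mode):.0f} GB usable out of a 768 GB maximum")
```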

What is HPE’s response to clients who want increased memory RAS, especially for in-memory workloads such as SAP HANA? Buy more expensive E7-based servers to receive slightly better memory RAS capability, OR install more memory on the already RAS-deficient E5-based servers to increase their capacity for memory spare ranks.

Net-net: HPE is pushing proprietary memory that is far more expensive than the industry-standard memory traditionally used with Intel servers – the very memory that earned those servers a reputation as low-cost relative to traditional Enterprise-class systems like IBM POWER or SPARC. That is evident in the SAP HANA space, where the systems required to support these in-memory workloads tend to need more capacity: more cores to achieve the core-to-memory ratios, and more sockets to achieve more memory capacity and its associated bandwidth. Yet HPE remains true to form; regardless of the path taken, it comes with increased cost, limitations, restrictions and qualifications.

In contrast to the never-ending “compromise” of the Intel options, IBM’s POWER8 servers use Enterprise memory that is “no compromise.” This buffered memory offers spare capacity, spare lanes, memory instruction replay, Chipkill and an incredible DDDC+1+1, allowing multiple DRAM failures before a system event occurs. The design point for POWER8 memory is simple: not to fail!

As you consider platforms to host in-memory workloads such as SAP HANA or DB2 BLU, consider which basket you want to place all of your eggs into: a platform with a memory subsystem designed not to fail, or a platform with the unending limitations listed above. The choice should be easy – choose POWER!


SAP HANA – could I have extra complexity please?

I just returned from IBM’s Systems Technical University conference in Orlando, having delivered presentations on four different topics:

  1. Benefits of SAP HANA on POWER vs Intel
  2. Why IBM POWER systems are datacenter leaders
  3. Only platform that controls Software Licensing
  4. Why DB2 beats Oracle on POWER (implied that it beats Intel).

With the SAP Sapphire conference last week in Orlando, there was a slew of announcements. A quick reminder for the uninitiated: SAP HANA is ONLY supported on Intel and POWER based systems running one OS, Linux (SUSE or Red Hat). Even so, IBM POWER continues to deliver the best value.

What is the value offered with the POWER stack? Flexibility! It really is that simple.  If I had a mic on the plane as I write this, I would drop it. Conversely, what is the value offered going with an Intel stack? Compromise!

Some of the flexibility offered through IBM POWER systems: scale up, scale out, virtualize completely, grow, shrink, move, perform concurrent maintenance, and mix workloads – existing ECC workloads on AIX or IBM i alongside new HANA workloads on Linux, all on the same server. All of this runs on the most resilient HANA platform available.

Why do I label Intel systems as “compromise” solutions? It isn’t a competitive shot or FUD. Listen, as a Client Executive and Executive Architect for a channel reseller, I am able to offer my clients solutions from multiple vendors, including IBM POWER and Intel-based system manufacturers. I’ve made the conscious decision, though, to promote IBM POWER over Intel. Why? Because I not only believe in the capabilities of the platform, but, having worked with some of the largest companies in the world, I also regularly hear and see the impact that running Enterprise workloads on Intel-based servers has on the business.

If you read my previous blog, I mention a client who recently moved their Oracle workloads from POWER to Intel. Within months, they had to buy over $5M in new licenses, going from a simple standalone instance and a few 2-node clusters (all on the same servers) to an 8-node VMware-based Oracle RAC cluster. This environment is having daily stability issues that significantly impact their business. Yes, their decision to standardize on a single platform has introduced complexity that costs them money and resources (staff exhausted and lacking the skills to manage the complexity) and impacts their end users.

The “compromise” in hosting SAP HANA on Intel is that everything has an asterisk by it – in other words, a limitation or restriction – and everything requires follow-up questions and research to ensure that what the business wants to do can actually be done. Here are some examples (a small validation sketch follows the list):
1) VMware vSphere 5.5 initially supported 1 VM per system, which has since been increased to 4 VMs, but with many qualifications:
   a) Restricted to 2- and 4-socket Intel servers
      i) VMs are limited to a socket
      ii) A 2-socket server ONLY supports 2 VMs; a 4-socket server supports 4 VMs of 1 socket each
   b) Only E5 v2, E5 v3, E7 v2 and E7 v3 chips are supported – NO Broadwell
   c) Want to redeploy capacity for other uses? Appliances certified only for SoH or S4H cannot be used for other purposes such as BW
   d) Did I mention those VMs are also limited to 64 vCPU and 1 TB of memory each?
   e) If a VM needs more memory than what is attached to that socket? No problem – you just have to add an additional socket and all of its memory. No sharing!
2) VMware vSphere 6.0 just recently went from 1 to 16 VMs per system.
   a) VMs are still limited to a socket or 1/2 socket.
   b) 1/2 socket isn’t as amazing as it sounds. Since vSphere supports 2-, 4- and 8-socket servers, there can be 16 x 1/2-socket VMs.
   c) What there cannot be is any combination of VMs above 1 socket with a 1/2 socket assigned. In other words, a VM cannot have 1.5 or 3.5 sockets; any VM resource requirement above 1 socket requires the addition of an entire socket, so 1.5 sockets becomes 2 sockets.
   d) Multi-node setups are NOT permitted … at all!
   e) VMs larger than 2 sockets cannot use Ivy Bridge based systems, only Haswell or Broadwell chips – but ONLY on 4-socket servers. Oh my gosh, this is making my head hurt!
   f) An 8-socket system only supports a single production VM, and only with Haswell processors. NOT Ivy Bridge and NOT Broadwell!
   g) VMs are limited to 128 vCPU and 4 TB of memory
3) VMware vSphere 6.5 with SAP HANA SPS 12 only supports Intel Broadwell based systems. What if your HANA appliance is based on Ivy Bridge or Haswell processor technology? “Where is that Intel rep’s business card? Guess I’ll have to buy another one since I can’t upgrade these.”
   a) VMs using more than 4 sockets are currently NOT supported with these Broadwell chips
   b) Now it gets better – I hope you are writing this down. For 2- OR 8-socket systems, the maximum VM size is 2 sockets; only a 4-socket system supports 1 VM with 4 sockets.
   c) Same 1/2-socket restrictions as vSphere 6.0.
   d) Servers with more than 8 sockets do NOT permit the use of VMware
   e) If your VM requirements exceed 128 vCPU and 4 TB of memory, you must move to a bare-metal system … Call me – I’ll put you on a POWER system where you can scale up and scale out without any of this mess
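
To show how quickly these rules bite, here is the minimal validation sketch mentioned above. It checks a requested HANA VM against the per-VM ceilings quoted for vSphere 6.0/6.5 (128 vCPU, 4 TB) versus the PowerVM figures discussed below (144 cores at SMT4, 16 TB). It is illustrative only; the socket and chip-generation restrictions listed above are additional gates this toy check does not model.

```python
# Toy check of a requested HANA VM against the per-VM ceilings cited in this post.
VSPHERE_6_LIMITS = {"max_vcpu": 128, "max_memory_tb": 4}     # vSphere 6.0/6.5 per-VM limits above
POWERVM_LIMITS = {"max_vcpu": 576, "max_memory_tb": 16}      # 144 cores x SMT4, per the PowerVM discussion below

def fits(limits: dict, vcpu: int, memory_tb: float) -> bool:
    return vcpu <= limits["max_vcpu"] and memory_tb <= limits["max_memory_tb"]

request = {"vcpu": 160, "memory_tb": 6}   # hypothetical large HANA instance
print("Fits vSphere per-VM limits:", fits(VSPHERE_6_LIMITS, **request))
print("Fits PowerVM per-VM limits:", fits(POWERVM_LIMITS, **request))
```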

Contrast all of these VMware + Intel limitations, restrictions, liabilities and qualifications – simply said, “compromise” systems – with the IBM Power System.

POWER8 servers run the POWER hypervisor, PowerVM. This hypervisor and its suite of features deliver flexibility, allowing all-physical, all-virtual or mixed physical and virtual resource usage on each system. Even where there are VM limits, such as 4 on the low-end system, those 4 could really be 423 VMs. I’m making a theoretical statement here to prove the point. Let’s use a 2-socket, 24-core S824 server: 3 VMs, each with 1 core (yes, I said core), for production usage, and the 4th “VM” is really a Shared Processor Pool with 21 cores. Those 21 cores support up to 20 VMs per core, or 420 VMs, for any non-production use.
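
A quick sketch of that arithmetic, using only the numbers in the paragraph above (a 24-core S824, three dedicated 1-core production VMs, and a 21-core shared pool at up to 20 micro-partitions per core):

```python
# The "4 VMs could really be 423 VMs" arithmetic from the paragraph above.
total_cores = 24                  # 2-socket, 24-core S824
production_vms = 3                # each with 1 dedicated core
shared_pool_cores = total_cores - production_vms            # 21 cores left for the shared pool
max_vms_per_core = 20             # PowerVM micro-partitioning ratio used above

non_production_vms = shared_pool_cores * max_vms_per_core   # 420
print(production_vms + non_production_vms)                  # 423 total VMs
```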

Each PowerVM VM supports up to 16 TB of memory and 144 cores. VM sizes above 108 cores require the use of SMT4, whereas VMs of 108 cores or fewer permit SMT8. Thus, 144 cores with SMT4 is 576 vCPUs, or 4.5X what Intel can do, with 4X the memory footprint. By the way, that 108-core VM would support 864 vCPUs – just saying! Note: I need to verify this, as the largest SMT8 VM may be 96 cores with only 768 vCPUs.
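
The vCPU comparison in that paragraph, written out (the 96-core/SMT8 caveat I mention is left as a note, since I have not verified it):

```python
# PowerVM vs. vSphere per-VM thread math from the paragraph above.
powervm_smt4_vcpu = 144 * 4        # 576 logical CPUs for a 144-core VM at SMT4
powervm_smt8_vcpu = 108 * 8        # 864 logical CPUs for a 108-core VM at SMT8 (to be verified)
vsphere_max_vcpu = 128             # vSphere 6.x per-VM limit cited earlier

print(powervm_smt4_vcpu, powervm_smt4_vcpu / vsphere_max_vcpu)   # 576, 4.5x
print(powervm_smt8_vcpu)                                         # 864
print(16 / 4)                                                    # 4x the per-VM memory (16 TB vs 4 TB)
```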

Not only can we allocate physical cores to VMs without being limited to 1/2- or full-socket increments like Intel, but POWER systems’ granularity also allows adjustments at the vCPU level.

PowerVM supports scale-out and scale-up.  Then again, if you have heard or read about the Pfizer story for scale-out BW, you might rethink a literal scale-out approach. Read IBM’s Alfred Freudenberger’s blog on this subject at https://saponpower.wordpress.com/2016/05/26/update-sap-hana-support-for-vmware-ibm-power-systems-and-new-customer-testimonials/

While on the subject of BWoH/B4H, PowerVM supports 6 TB per VM whereas vSphere 6.0 supports 3 TB, and the limitations only grow from there.

Do you see why I choose to promote IBM Power over Intel? When I walk into a client, the most valuable item I bring with me is my credibility. HANA on Intel is a constant train wreck of changes and gotchas. Clients currently running HANA on Intel – or, better yet, still running ECC on Intel – have options. That option is to move to a HANA 2.0 environment using SUSE 12 or Red Hat 7 Linux on POWER servers. Each server can host multiple VMs with greater resiliency, providing the business the flexibility it wants from a critical system that likely touches every part of the business.

Does your IT shop use a combination wrench?

More and more, IT shops seem inclined to consolidate and simplify their infrastructure onto one platform – a mindset that all workloads can or should run on a single platform wrapped in “software-defined this” and “software-defined that.” It tantalizes decision makers’ senses as vendors claim to reduce complexity and cost.

Technology has become Ford vs. Chevy or John Deere vs. Case International. While these vendors each have unique capabilities and offerings, they are all leaders in innovation and reliability. For IT shops, though, there is a perception that only Intel and VMware are viable infrastructure options for deploying every workload type: mission- and life-critical workloads in healthcare, high-frequency financial transactions, HPC, big data, analytics, emerging cognitive and AI, but also the traditional ERP workloads that run entire businesses. SAP ECC, SAP HANA and Oracle EBS are probably the most common I see, along with some industry-specific packages for industrial and automotive companies – I’m thinking of Infor.

When a new project comes up, little thought is given to the platform. Either the business or perhaps the ISV will state what and how many of server X should be ordered. The parts arrive and eventually get deployed. Little consideration is given to the total cost of ownership or the impact to the business caused by system complexity.

I watched a client move their Oracle workloads to IBM POWER several years ago. This allowed them to reduce software licensing and annual maintenance costs and to redeploy licenses to other projects – cost avoidance by not having to add net-new licensing. As happens in business, people moved on, out and up. New people came in whose answer to everything was Intel + VMware. Yes, a combination wrench.

If you have ever used a combination wrench, you know there are times it is the proper tool. However, it can also strip or round over the head of a bolt or nut if too much pressure or torque is applied. Sometimes the proper tool is an SAE or metric box wrench, possibly a socket, even an impact wrench. In this client’s case, they have started to move their Oracle workloads from POWER to Intel – workloads currently running on standalone servers or, at most, 2-node PowerHA clusters – moving these simple, low-complexity Oracle VMs to 6-node VMware Oracle RAC clusters that have now grown to 8 nodes. Because we all know Oracle RAC scales really well (please tell me you picked up on the sarcasm).

I heard from the business earlier this year that they had to buy over $5M of net-new Oracle licensing for this new environment. Because of this unforeseen expense, they are moving other commercial products to open source – since we all know open source is “free” – to offset the Oracle cost.

Oh, and I forgot to mention: that 8-node VMware Oracle RAC cluster is crashing virtually every day. I guess they are putting too much pressure on the combination wrench!

Oracle is a mess & customers pay the price!

Chaos that is Oracle

Clients are rapidly adopting open-source technologies in support of purpose-built applications while also shifting portions of on-premises workloads to major cloud providers like Amazon’s AWS, Microsoft’s Azure and IBM’s SoftLayer. These changes are sending Oracle’s licensing revenue into the tank, forcing them to re-tool … and I’m being kind by saying it that way.

What do we see Oracle doing these days?

  • Aggressively going after VMware environments that use Oracle Enterprise products for licensing infractions
  • Pushing each of their clients toward Oracle’s public cloud
  • Drastically changing how Oracle is licensed for Authorized Cloud Environments using Intel servers
  • Latest evidence indicates they are set to abandon Solaris and SPARC technology
  • On-going staff layoffs as they shift resources, priorities & funding from on-premises to cloud initiatives

VMware environments

I’ve previously discussed how, for running Oracle on Intel (vs. IBM POWER), Intel and VMware have an Oracle problem. This was acknowledged by Chad Sakac, Dell EMC’s President of the Converged Division, in his August 17, 2016 blog post, which really amounted to an open letter to King Larry Ellison himself. I doubt most businesses using Oracle with VMware and Intel servers fully understand the financial implications for their business. Allow me to paraphrase the essence of the note: “Larry, take your boot off the necks of our people.”

This is a very contentious topic, so I’ll not take a position but will try to briefly explain both sides. Oracle’s position is simple even though it is very complex: Oracle does not recognize VMware as an approved partitioning method (it views it as soft partitioning) for limiting Oracle licensing. As such, clients running Oracle in a VMware environment, regardless of how little or how much they use, must license it for every Intel server in that client’s enterprise (assuming vSphere 6+). They really do go beyond a rational argument, IMHO. Since Oracle owns the software and authored the rules, it uses these subtleties to lean on clients and extract massive profits despite what the contract may say. An example that comes to mind is how Oracle suddenly changed licensing for Oracle Standard Edition and Standard Edition One. It sunset both of these products as of December 31, 2015, replacing them with Standard Edition 2. In what can only be described as screwing clients, Oracle halved the number of sockets allowed on a server or in a RAC cluster and limited the number of CPU threads per DB instance while doubling the minimum number of Named User Plus (NUP) licenses. On behalf of Larry, he apologizes to any 4-socket Oracle Standard Edition users, but if you don’t convert to a 2-socket configuration (2 sockets in 1 server, or 1 socket in each of 2 servers using RAC), be prepared to license the server under the Oracle Enterprise Edition licensing model.

The Intel server vendors and VMware have a different interpretation of how Oracle should be licensed; I’ll boil their position down to using host or CPU affinity rules. House of Bricks published a paper that does a good job of trying to defend Intel + VMware’s licensing position. In that effort, they also show how fragile the ground under this approach is, highlighting the risks businesses take if they hitch their wagons to HoB, VMware and, at least, Dell’s recommendations.

This picture, which I believe House of Bricks gets the credit for creating, captures the Oracle licensing model for Intel + VMware environments quite well. When you pull your car into a parking garage, you expect to pay for one spot, yet Oracle says you must pay for every one because you could technically park in any of them. VMware asserts you should pay for, at most, a single floor, because your vehicle may not be a compact car, may not have the clearance for all levels, and there are reserved and handicapped spots you can’t use. You get the idea.

[Image: Oracle licensing parking-garage analogy]

It is simply a disaster for any business to run Oracle on Intel servers. Oracle wins if you do not virtualize and run each workload on standalone servers. Oracle wins if you use VMware, regardless of how little or how much you actually use. Be prepared to pay or to litigate!

Oracle and the “Cloud”

This topic is more difficult to source, so I’ll stick to anecdotal evidence – take it or leave it. At contract renewal, when adding products to a contract, or on new projects like migrating JD Edwards “World” to “EnterpriseOne” or a new Oracle EBS deployment, a business might be subjected to an offer like this: “Listen Bob, you can buy 1,000 licenses of XYZ for $10M, or you can buy 750 licenses of XYZ for $6M, buy 400 cloud units for $3M, and we will generously throw in 250 licenses … you’ll still have to pay support, of course. You won’t get a better deal, Bob – act now!” Yes, Oracle is willing to take a hit on on-premises license revenue while bolstering its cloud sales by simply shuffling the Titanic’s deck chairs. These clients, for the most part, are not interested in the Oracle cloud and will never use it other than to get a better deal during negotiations. Oracle then reports to Wall Street that it is having tremendous cloud growth. Just google “oracle cloud fake bookings” and read plenty of evidence to support this.

Licensing in the Cloud

Leave it to Oracle marketing to find a way to get even deeper into clients’ wallets – congratulations, they’ve found a new way in the “Cloud.” Oracle charges at least 2X more for Oracle licenses on Intel servers running in Authorized Cloud Environments (ACEs). You do not license Oracle in the cloud using the on-premises licensing factor table, so the more VMs running in an ACE, the more you will pay versus an on-premises deployment. Properly licensing an on-premises Intel server (remember, the underlying point is that Oracle on POWER servers is the better solution), regardless of whether virtualization is used and assuming a 40-core server, would require 20 Oracle licenses (the licensing factor for Intel servers is 0.5 per core). Assume 1 VMware server, ignoring that it is probably part of a larger vSphere cluster. Once licensed, a client using VMware could theoretically run as many Oracle VMs as desired or as the server supports – over-provision the hell out of it, it doesn’t matter. For that same workload in an ACE, you pay for what amounts to every core. Remember: on-premises it is 1 Oracle license for every 2 Intel cores, but in an ACE it is 1 OL per core.

AWS
Putting your Oracle workload in the cloud? Oracle’s license rules stipulate that in AWS, both the physical core and its hyperthread count as vCPUs; thus 2 vCPUs = 1 Oracle license (OL). Using the same 40-core Intel server mentioned above, with hyperthreading it presents 80 threads, or 80 vCPUs. Under Oracle’s new cloud licensing guidelines, that would be 40 OL. If this same server were on-premises, those 40 physical cores (regardless of threads) would be 20 OL … do you see it? The licensing is double! If your AWS vCPU consumption is less than your on-premises consumption you may be OK, but as soon as your consumption goes above that point – well, break out your checkbook. Let your imagination run wild thinking of the scenarios where you will pay for more licenses in the cloud than on-prem.

Azure
Since Azure does not use hyperthreading, 1 vCPU = 1 core. The licensing rule for Azure – or any other ACE where hyperthreading is not used – is 1 vCPU = 1 OL. If a workload requires 4 vCPUs, it requires 4 OL versus the 2 OL it would need on-premises.
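
Pulling the AWS and Azure examples together, here is a small sketch of the license counts using the factors described above (0.5 per core on-premises, 1 OL per 2 AWS vCPUs with hyperthreading, 1 OL per vCPU on Azure). It simplifies Oracle’s actual policy document, so use it only to see the doubling effect, not as licensing advice.

```python
# Rough Oracle Enterprise Edition license counts for the same 40-core workload.
def on_prem_licenses(cores: int, core_factor: float = 0.5) -> float:
    return cores * core_factor                      # core-factor table for Intel, per this post

def aws_licenses(vcpus: int) -> float:
    return vcpus / 2                                # 2 vCPUs (hyperthreads) = 1 OL in an ACE

def azure_licenses(vcpus: int) -> float:
    return vcpus                                    # no hyperthreading exposed: 1 vCPU = 1 OL

cores = 40
print("On-premises:", on_prem_licenses(cores))      # 20 OL
print("AWS (80 vCPU):", aws_licenses(cores * 2))    # 40 OL -- double the on-prem count
print("Azure (40 vCPU):", azure_licenses(cores))    # 40 OL for the same 40 cores
```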

There are three excellent references to review: Oracle’s cloud licensing document, an article by SiliconANGLE giving its take on the change, and a blog by Tim Hall, a DBA and Oracle ACE Director, sharing his concerns. Just search for this topic starting from January 2017 and read until you fall asleep.

Oracle
Oracle offers its own cloud and, as you might imagine, does everything it can to favor that cloud through licensing, contract negotiations and other means. Across SaaS, IaaS and PaaS, its marketing machine says it is second to none, whether the competition is Salesforce, Workday, AWS, Azure or anyone else. Of course, neither analysts, media, the internet nor Oracle’s own earnings reports show it having meaningful success – at least not to the degree it claims.

Most recently, Oracle gained attention for updating how clients can license Oracle products in ACEs, as mentioned above. As you might imagine, Oracle licenses its products in its own cloud slightly differently than in competitors’ clouds, but it still penalizes Intel and even SPARC clients, whom it will try to migrate into its cloud running on Intel (since it appears Oracle is abandoning SPARC). The Oracle Cloud offers clients access to its products hourly or monthly, in metered and non-metered formats, at up to four different software levels. Focusing on Oracle DB, the general tiers are the Standard, Enterprise, High-Performance and Extreme-Performance packages – think of them as Oracle Standard Edition, Enterprise Edition, EE + tools, and EE + RAC + tools. Oracle also defines hardware tiers as “Compute Shapes”; the three are General Purpose, High-Memory and Dedicated Compute.

Comparing the cost of an on-premises perpetual license for Oracle Enterprise Edition versus a non-metered monthly license for the Enterprise tier is fair, because both use the Oracle Enterprise Edition database. Remember, a perpetual license is a one-time purchase: $47,500 list price for EE DB, plus 22% per year for annual maintenance. The Enterprise tier on a High-Memory compute shape in the Oracle Cloud is $2,325 per month. That compute shape consists of 1 OCPU (Oracle CPU), or 2 vCPUs (2 threads / 1 core). Yes, just like AWS and Azure, Intel licensing in the cloud is at best 1.0 per core versus 0.5 per core on-premises. Between how heavily a server might be over-provisioned and the fact that an on-premises server is fully licensed at 1/2 of its installed cores, there are several ways clients will vastly overpay for Oracle products in any cloud.

The break-even point for a perpetual license plus support versus a non-metered Enterprise package on a High-Memory compute shape is roughly 34 months (see the sketch after the list below).

  • Perpetual license
    • 1 x Oracle EE DB license = $47,500
    • 22% annual maintenance = $10,450
    • 3-year cost: $78,850
  • Oracle Cloud – non-metered Enterprise package using a High-Memory shape
    • 1 x OCPU for the Enterprise package on a High-Memory shape = $2,325/mo
    • 1-year cloud cost = $27,900
    • 36-month cost: $83,700
  • Cross-over point is at roughly 34 months
    • $79,050 is the 34-month cost in the cloud
  • An Oracle Cloud license becomes significantly more expensive after this.
    • Year 4 for a perpetual license would be $10,450 in maintenance
    • 12 months in year 4 for the cloud license would be $27,900
    • Annual cost increase for a single cloud license over the perpetual license = $17,450
  • Please make your checks payable to “Larry Ellison”
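
Here is the break-even sketch referenced above, using only the list prices quoted in this post ($47,500 per EE license, 22% annual support, $2,325 per OCPU-month for the non-metered Enterprise package on a High-Memory shape). Real deals involve discounts on both sides, so the exact crossover month will move.

```python
# Perpetual license + support vs. non-metered Oracle Cloud cost, per the list prices above.
LICENSE = 47_500            # Oracle EE DB, one-time, list price
SUPPORT = 0.22 * LICENSE    # 22% annual maintenance = $10,450
CLOUD_MONTHLY = 2_325       # Enterprise package, High-Memory shape, per OCPU per month

def perpetual_cost(months: int) -> float:
    years = -(-months // 12)                 # support billed annually (ceiling of months/12)
    return LICENSE + SUPPORT * years

def cloud_cost(months: int) -> float:
    return CLOUD_MONTHLY * months

for m in (12, 24, 30, 34, 36, 48):
    print(f"{m:>2} months: perpetual ${perpetual_cost(m):,.0f} vs cloud ${cloud_cost(m):,.0f}")
# The cloud line crosses the perpetual line at roughly month 34 and keeps climbing afterwards.
```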

Oracle’s revenues continue to decline as clients move to purpose-built NoSQL solutions such as MongoDB, Redis Labs, Neo4j, OrientDB and Couchbase, as well as SQL-based solutions like MariaDB and PostgreSQL (I like EnterpriseDB); even DB2 is a far better value. Oracle’s idea isn’t to re-tool by innovating and listening to clients so it can move with the market. No, it gets out the big stick – repeating the classic mistake so many great companies have made before, which is to stop evolving while pushing clients until something breaks. Yes, Boot Hill is full of dead technology companies that failed to innovate and adapt. This is why Oracle is in complete chaos. Clients beware – you are on their radar!


C is for Performance!

E850C is a compact power-packed “sweet spot” server!

“C” makes the E850 a BIG deal!

IBM delivered a modest upgrade to the entry-level POWER8 Enterprise server, going from the E850 to the E850C. The new features are in the processors, memory, Capacity on Demand and bundled software.

The most exciting features of the new E850C – which, by the way, carries a new MTM of 8408-44E – are the processors. You might expect me to say that, but here is why the E850C is the new “sweet spot” server for AIX and Linux workloads that require a mix of performance, scalability and reliability.

A few things that are the same on the E850C as they were on the E850:

  • Classified as a “small” tier server
  • Available with a 3 year 24 x 7 warranty
  • PVU for IBM software is 100 when using AIX
  • PVU for IBM software is 70 when using Linux
  • Supports IFLs (Integrated Facility for Linux)
  • Offers CuOD, Trial, Utility and Elastic CoD
  • Does NOT offer mobile cores or mobile memory (boo hiss)
  • Does NOT support Enterprise Pools (boo hiss)

The original 8408-E8E, aka E850, was available with 32 cores at 3.72 GHz, 40 cores at 3.35 GHz or 48 cores at 3.02 GHz, initially supporting 2 TB of DDR3 memory and eventually up to 4 TB. Using up to 4 x 1400W power supplies and constrained by its dense packaging, what it did not offer was the option to exploit EnergyScale, which lets users decrease or increase processor clock speeds. The clock speeds were capped at their nominal rates of 3.72, 3.35 and 3.02 GHz, so users could not choose among the usual options: do nothing, lower or raise speeds based on utilization, lower to a set point or, more importantly, increase to the higher rate. That is free performance – rPerf, in the case of AIX.

Focusing on the processor increase – because who the hell wants to run their computers slower – the E850C has a modest boost ranging from 2.5% to 4.6%. I say modest because the other POWER8 models range from 4% up to 11% <play Tim Allen “grunt” from Home Improvement>. This modest increase doesn’t matter much, because the new C model delivers 32 cores at 4.22 GHz nominal increasing to 4.32 GHz, 40 cores at 3.95 GHz nominal increasing to 4.12 GHz, and 48 cores at 3.65 GHz nominal increasing to 3.82 GHz. These speeds are at the high end of the entire Scale-out line and on par with the E870C/E880C models.

Putting these increases into perspective by comparing nominal rPerf values for the E850 vs. the E850C: the 32-core E850C gains 59 rPerf, the 40-core E850C gains 88 rPerf and the 48-core E850C gains 113 rPerf. By doing nothing but increasing the clock speed, the 48-core E850C delivers an rPerf increase equivalent to a 16-core POWER6 570.

It hasn’t been mentioned yet, but the E850 and E850C use a 4U chassis. The 48-core E850C just mentioned delivers an rPerf level of 859. Compare this to the 16U POWER7+ 770 (9117-MMD) with 64 cores, which delivers only 729 rPerf, or, going back to the initial 770 model, the 9117-MMB with 48 cores in a 16U footprint delivering 464 rPerf. Using the MMD values, this is a 4:1 footprint reduction and an 18% increase in rPerf with a 25% reduction in cores – why does that matter? Greater core strength means fewer OS and virtualization licenses and less SWMA, but more importantly, less enterprise software licensing such as Oracle Enterprise DB.
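
For those who like to see the math, here is a quick sketch of the 48-core E850C vs. POWER7+ 770 (9117-MMD) comparison using the rPerf, core and rack-unit figures above:

```python
# E850C (48 cores, 4U, 859 rPerf) vs. POWER7+ 770 9117-MMD (64 cores, 16U, 729 rPerf).
e850c = {"rperf": 859, "cores": 48, "rack_units": 4}
p770_mmd = {"rperf": 729, "cores": 64, "rack_units": 16}

print("Footprint reduction: %d:1" % (p770_mmd["rack_units"] / e850c["rack_units"]))   # 4:1
print("rPerf increase: %.0f%%" % ((e850c["rperf"] / p770_mmd["rperf"] - 1) * 100))    # ~18%
print("Core reduction: %.0f%%" % ((1 - e850c["cores"] / p770_mmd["cores"]) * 100))    # 25%
print("rPerf per core: %.1f vs %.1f" % (e850c["rperf"] / e850c["cores"],
                                        p770_mmd["rperf"] / p770_mmd["cores"]))       # ~17.9 vs ~11.4
```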

IBM achieved this in a couple of ways. Not being an IBMer, I do not know all of the techniques, but they include increasing chip efficiency, increasing the power supplies to 2000W each and moving to DDR4 memory, which uses less power.

What else?

Besides the improved clock speeds and the bump to DDR4 memory, the E850C reduces the minimum number of active cores. Every E850C must have a minimum of 2 processor books (2 x 8, 2 x 10 or 2 x 12 cores) while only requiring 8, 10 or 12 cores to be active, depending on the processor book used; the E850 required all cores in the first 2 processor books to be active. This change is another way the E850C lets clients get into the “sweet spot” server at a lower entry price. Memory activations are the same: 50% of installed memory or 128 GB, whichever is more.

A couple of nice upgrades from the E850 are now standard: Active Memory Mirroring and PowerVM Enterprise Edition, while still offering a 3-year 24 x 7 warranty (except in Japan).

The E850C does not support IBM i, but it does support AIX 6.1, 7.1 and 7.2 (research specific versions at System Software Maps) and the usual Linux distros.

Software bundle enhancements over the E850 are:

  • Starter pack for SoftLayer
  • IBM Cloud HMC apps
  • IBM Power to Cloud Rewards
  • PowerVM Enterprise Edition

Even though it isn’t bundled in, consider using IBM Cloud PowerVC Manager, which is included with the AIX Enterprise Edition bundle or à la carte with AIX Standard Edition or any Linux distro.

In summary

The E850C is a power-packed, compact package. With up to 48 cores and 4 TB of RAM in a 4U footprint, it is denser than two 2U S822s with 20 cores / 1 TB RAM each or one 4U S824 with 24 cores / 2 TB RAM. Yes, the E870C with 40 cores and the E880C with 48 cores, both with 8 TB of RAM in a single node, still require 7U to start with. Clients requiring the greatest scalability, performance, flexibility and reliability should look at the E870C or E880C, but for a lower entry price delivering high performance in a compact solution, the E850C is the complete package.