(Please refer to the article "Oracle 12c In-Memory Database is Out -
Hardly Anybody Notices" for an update on Oracle 12c databases.)
Contemporary large servers are routinely configured with 2TB of RAM, so it
is possible to fit an entire average-size OLTP database in memory directly
accessible by the CPU. There is a long history of academic research on how
best to utilize relatively abundant computer memory, and this research is
becoming increasingly relevant as databases serving business applications
head toward memory-centric design and implementation.
If you simply place the Oracle RDBMS's files on solid-state disk, or
configure the buffer cache (SGA) large enough to contain the whole database,
Oracle will not magically become an IMDB, nor will it perform much faster.
In order to properly utilize memory, IMDB databases require purposely
architected, confi... (more)
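To make the point concrete, here is a small illustrative sketch (toy Python, not Oracle code) of why a purpose-built memory layout matters: scanning one column of a row-oriented table touches every field of every row, while a columnar in-memory layout reads only the dense array it needs.

```python
# Toy comparison of row-oriented vs. column-oriented in-memory layouts.
# All names here are invented for illustration.

# Row store: a list of (id, name, salary) tuples
rows = [(i, f"name{i}", i * 10) for i in range(1000)]

# Column store: one contiguous list per column
ids = [r[0] for r in rows]
names = [r[1] for r in rows]
salaries = [r[2] for r in rows]

def sum_salaries_rowstore(table):
    # Must walk every row tuple just to reach the salary field
    return sum(r[2] for r in table)

def sum_salaries_colstore(salary_col):
    # Reads one dense array; cache-friendly (and, in a real IMDB,
    # amenable to compression and SIMD scans)
    return sum(salary_col)

assert sum_salaries_rowstore(rows) == sum_salaries_colstore(salaries)
```

Real in-memory engines take this much further (compression, vectorized execution, different locking), which is exactly why a big buffer cache alone does not turn Oracle into an IMDB.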
Most enterprise-class shops today run their Oracle databases on HP-UX, AIX,
or Sun OS. Is it possible to move these databases to the public cloud, and,
if so, which providers can help with such a move?
Public cloud services are closely tied to virtualization, i.e., the use of
various flavors of popular virtual machines (VMware, XEN). VMs are one of
the major ingredients that give cloud services such desirable characteristics
as scalability and on-demand, instant provisioning and deprovisioning of
resources. Virtual machines are able to run many guest operating s... (more)
The Oracle database is a relational database management system that largely
complies with the ACID transaction requirements (atomicity, consistency,
isolation, durability). This means that each database transaction will be
executed in a reliable, safe, and consistent manner. To comply with ACID,
the Oracle database software implements a fairly complex and expensive (in
terms of computing resources, i.e., CPU, disk, memory) set of mechanisms,
such as redo and undo logging, memory latching, and metadata maintenance,
that make concurrent work possible while maintaining data integrity. Any
databa... (more)
AWS is built on commodity hardware and is based on software virtual
machines. The AWS documentation states:
It's inevitable that EC2 instances will fail, and you need to plan for it.
As a rule of thumb, you should be a pessimist when designing architecture for
This means that putting your Oracle databases on the AWS cloud should be
accompanied by carefully thought-out fault-tolerance and DR procedures.
It is likely that:
An AWS instance running an Oracle database will fail
Some of the volumes (EBS storage) attached to an instance running an Oracle
database will fail, i.e., you wi... (more)
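One consequence of "design for failure" is that client code should not assume a single endpoint stays up. The sketch below (all names invented for illustration) shows the basic pattern: retry against the primary, then fail over to a standby.

```python
# Hypothetical retry-with-failover sketch (endpoint names are invented).
# Try the primary database endpoint; on repeated failure, fall back to a
# standby -- the pessimistic "plan for failure" stance AWS recommends.

def query_with_failover(endpoints, do_query, retries_per_endpoint=2):
    last_error = None
    for endpoint in endpoints:            # primary first, then standbys
        for _ in range(retries_per_endpoint):
            try:
                return do_query(endpoint)
            except ConnectionError as e:  # transient instance/volume failure
                last_error = e
    raise last_error

# Simulated endpoints: the primary always fails, the standby answers.
def fake_query(endpoint):
    if endpoint == "primary":
        raise ConnectionError("instance unreachable")
    return f"result from {endpoint}"

print(query_with_failover(["primary", "standby"], fake_query))
# -> result from standby
```

In a real deployment the standby would be an Oracle Data Guard or otherwise replicated instance, ideally in a different Availability Zone, and the failover would also need to consider replication lag.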
Hadoop and AWS are enterprise-ready cloud computing, distributed
It is straightforward to add more DataNodes, i.e., storage, to a Hadoop
cluster on AWS: you just create another AWS instance and add the new node to
the Hadoop cluster. Hadoop will take care of balancing storage to keep the
level of file-system utilization across DataNodes as even as possible.
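The balancing behavior can be sketched in a few lines (a toy model, not Hadoop's actual balancer, which also accounts for replication and rack awareness): if each new block is placed on the least-utilized DataNode, a freshly added empty node naturally absorbs writes until utilization evens out.

```python
# Toy sketch of HDFS-style block placement (not Hadoop's real balancer):
# each new block goes to the least-utilized DataNode, so adding an empty
# node gradually evens out file-system utilization across the cluster.

def place_block(datanodes, block_size):
    # datanodes: dict of node name -> used bytes; equal capacity assumed
    target = min(datanodes, key=datanodes.get)
    datanodes[target] += block_size
    return target

nodes = {"dn1": 600, "dn2": 500}
nodes["dn3"] = 0            # a freshly added, empty DataNode
for _ in range(10):
    place_block(nodes, 100)

# New blocks flow to dn3 first, pulling utilization toward even
print(nodes)
# -> {'dn1': 700, 'dn2': 700, 'dn3': 700}
```

Hadoop's actual balancer runs as a separate process and moves existing blocks as well, but the goal is the same: keep DataNode utilization within a configurable threshold of the cluster average.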
Cloudera's distribution of Hadoop includes Cloudera Manager, which makes it
simple to install Hadoop and add new nodes to it. The screenshot below shows
an existing HDFS service with two DataNodes. We will expand HDFS by a... (more)