Oracle “memoptimize for write”: using it for IoT and more

Marcelo Ochoa
7 min read · Aug 24, 2020
Photo by chuttersnap on Unsplash

Two weeks ago, during Oracle Developer Live, the keynote “The Future is Data-Driven” by Juan R. Loaiza, Executive Vice President, Mission-Critical Database Technologies, mentioned the benefits of the memoptimize for write table option; here is a video with the explanation (at the 24:09 mark):

Keynote: The Future is Data-Driven

Looking at the definition, I decided to test it myself and compare this method with other traditional options such as direct-load INSERT and JDBC batch insert.

The big picture of the memoptimize for write architecture is depicted in the following figure:

Figure 12–1: Fast-Ingest with high-frequency inserts

As you can see, a middle-tier application sends information row by row to the RDBMS, which packs many of the rows in the SGA so that a background process can later write them to disk. Easy, but what is the impact of this write scheme on my apps? Let's prepare a test.

Create a fresh Oracle 19c instance using Docker

To compare the different options I am using a fresh Oracle 19c RDBMS instance running on Docker; here is my creation command:

# docker run -ti --name test-19c -p 1521:1521 -p 5500:5500 -v test-19c:/opt/oracle/oradata -v /run/shm:/dev/shm oracle/database:19.3.0-ee

Once the database reports it is ready to use, we need to change the SGA and the system parameter file to get the memoptimize for write functionality working; connect as SYSDBA and execute:

# docker exec -ti test-19c bash
$ export ORACLE_SID=ORCLCDB
$ sqlplus / as sysdba
SQL> alter system set memoptimize_pool_size=2G scope=spfile;
SQL> alter system set large_pool_size=2G scope=spfile;
SQL> alter system set sga_target=6G scope=spfile;
SQL> alter system set "_exadata_feature_on"=true scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> exit
  • memoptimize_pool_size sizes the SGA area used for storing pending rows
  • large_pool_size must be increased to avoid trace errors
  • sga_target must be increased from its default to accommodate the areas above
  • the _exadata_feature_on internal flag must be changed to allow the memoptimize for write functionality
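
After the restart you can verify that the values took effect. Here is a minimal JDBC sketch; it assumes an already opened java.sql.Connection named connection, as in the test code shown later:

// quick sanity check of the instance parameters after the restart
try (PreparedStatement ps = connection.prepareStatement(
        "SELECT name, value FROM v$parameter WHERE name IN " +
        "('memoptimize_pool_size','large_pool_size','sga_target')");
     ResultSet rs = ps.executeQuery()) {
    while (rs.next())
        System.out.println(rs.getString(1) + " = " + rs.getString(2));
}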

Disclaimer: the Memoptimized Rowstore feature is only available on Exadata. Enabling the _exadata_feature_on hidden parameter, unless explicitly instructed to do so by Oracle Support, can cause a violation of your license agreement with Oracle Corporation. I have enabled this hidden parameter for demonstration purposes only and do not recommend enabling it on your system!

Testing write performance

To get a consistent test scenario I am using the SwingBench tool, as in my previous post Estimating Micro services max DB throughput, but we need to write our own JDBC test with a different CREATE TABLE and DML operations not included by default in the SwingBench download. I created a test suite named testcollection in five files: StressTest.java, StressTestInsert.java, StressTestSelect.java, StressTestUpdate.java and StressTestDelete.java. This test suite replicates the table structure defined in the samples of the article Launch & Persist JSON Documents In The Cloud In 10 Minutes Or Less With Autonomous JSON Database, changing the JSON column to VARCHAR2 instead of BLOB and removing default column values because they are not allowed with the memoptimize for write functionality.

Note: the 4K JSON document limit may seem small for many apps, but for IoT applications that generate sensor data it is enough, and obviously you could add many other relational columns with data.
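
To put the limit in perspective, here is a quick sketch sizing a hypothetical sensor reading (every field name is invented for illustration):

// hypothetical IoT sensor payload, field names are made up for this example
String json = "{\"sensorId\":\"st-001\",\"ts\":\"2020-08-24T15:16:00Z\"," +
              "\"temperature\":21.7,\"humidity\":48.2,\"battery\":87}";
// VARCHAR2(4000) is a byte limit, so measure the encoded size
int bytes = json.getBytes(java.nio.charset.StandardCharsets.UTF_8).length;
System.out.println(bytes + " bytes, fits in VARCHAR2(4000): " + (bytes <= 4000));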

Our table structure is:

CREATE TABLE TESTCOLLECTION (
  ID VARCHAR2(255 BYTE),
  CREATED_ON TIMESTAMP(6),
  LAST_MODIFIED TIMESTAMP(6),
  VERSION VARCHAR2(255 BYTE),
  JSON_DOCUMENT VARCHAR2(4000),
  CHECK (JSON_DOCUMENT IS JSON) ENABLE,
  PRIMARY KEY (ID)
) SEGMENT CREATION IMMEDIATE MEMOPTIMIZE FOR WRITE PARALLEL 4;
ALTER TABLE TESTCOLLECTION MODIFY(ID NOT NULL ENABLE);
ALTER TABLE TESTCOLLECTION MODIFY(CREATED_ON NOT NULL ENABLE);
ALTER TABLE TESTCOLLECTION MODIFY(LAST_MODIFIED NOT NULL ENABLE);
ALTER TABLE TESTCOLLECTION MODIFY(VERSION NOT NULL ENABLE);
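
If you want to confirm the attribute stuck, my understanding is that since 18c the data dictionary exposes a MEMOPTIMIZE_WRITE column in USER_TABLES (verify against your release's reference); a quick check from JDBC:

// confirm the table attribute; assumes the same open connection as above
try (PreparedStatement ps = connection.prepareStatement(
        "SELECT memoptimize_write FROM user_tables WHERE table_name = 'TESTCOLLECTION'");
     ResultSet rs = ps.executeQuery()) {
    if (rs.next())
        System.out.println("MEMOPTIMIZE FOR WRITE: " + rs.getString(1)); // ENABLED or DISABLED
}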

The insert JDBC code follows; note the memoptimize_write hint:

insPs = connection.prepareStatement("INSERT /*+ memoptimize_write */ INTO TESTCOLLECTION VALUES (?,?,?,?,?)");
insPs.setString(1, RandomUtilities.randomAlpha(32, 32));
insPs.setTimestamp(2,new Timestamp(System.currentTimeMillis()));
insPs.setTimestamp(3,new Timestamp(System.currentTimeMillis()));
insPs.setString(4, RandomUtilities.randomAlpha(32, 32));
insPs.setString(5, "{\"name\": \"todd\", \"is_cool\": true, \"age\": 43}");
insPs.execute();
connection.commit();
addInsertStatements(1);
addCommitStatements(1);
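
Keep in mind that with fast ingest the rows just committed may still be sitting in the large pool buffer, not yet written to disk nor visible to other sessions. If you need to force a drain, my recollection of the 19c API (double-check it in your version's PL/SQL Packages reference) is that the DBMS_MEMOPTIMIZE package offers a flush call, roughly:

// flush this session's pending fast ingest data to disk
// (DBMS_MEMOPTIMIZE.WRITE_END is my recollection of the 19c API,
// verify it against your release's documentation before relying on it)
try (CallableStatement cs = connection.prepareCall("{call DBMS_MEMOPTIMIZE.WRITE_END()}")) {
    cs.execute();
}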

To add our test suite to SwingBench, put the above files in the source subdirectory; for example, if you downloaded and extracted SwingBench into the /opt directory it will look like:

mochoa@localhost:/opt/swingbench/source$ ll com/dom/benchmarking/swingbench/testcollection/
total 28
drwxrwxr-x 2 mochoa mochoa 4096 ago 23 20:15 ./
drwxr-xr-x 10 mochoa mochoa 4096 ago 23 20:15 ../
-rw-r--r-- 1 mochoa mochoa 1706 ago 23 20:39 StressTestDelete.java
-rw-r--r-- 1 mochoa mochoa 2396 ago 24 15:16 StressTestInsert.java
-rw-r--r-- 1 mochoa mochoa 3171 ago 24 11:54 StressTest.java
-rw-r--r-- 1 mochoa mochoa 1733 ago 23 20:36 StressTestSelect.java
-rw-r--r-- 1 mochoa mochoa 1734 ago 23 20:38 StressTestUpdate.java

Compile your code using Ant as:

mochoa@localhost:/opt/swingbench/source$ ./antbuild 
Buildfile: build.xml
clean:
[delete] Deleting directory /opt/swingbench/source/classes
init:
[mkdir] Created dir: /opt/swingbench/source/classes
compile:
[javac] Compiling 88 source files to /opt/swingbench/source/classes
[javac] Note: /opt/swingbench/source/com/dom/benchmarking/swingbench/testcollection/StressTestInsert.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
dist:
[jar] Building jar: /opt/swingbench/lib/mytransactions.jar
BUILD SUCCESSFUL
Total time: 2 seconds

Finally, add the Testcollection_Test.xml configuration file into the ../configs/ directory of SwingBench.

Running the different insert tests

The first test uses a regular JDBC insert and regular table creation, with no memoptimize or append/nologging optimizations. The following pictures show network throughput and CPU utilization for a test launched as:

mochoa@localhost:/opt/swingbench/source$ ../bin/charbench -cs //localhost/orclpdb1 -c ../configs/Testcollection_Test.xml -u scott -p tiger -uc 16 -a -v users,tpm,tps,cpu,disk -mt 1000000 -mr
  • 1 million inserts
  • 16 parallel connections
  • simple JDBC insert
  • default table creation
left side: completed test; right side: test start
  • 2.94 MiB/s maximum throughput, 2.8 MiB/s typical
  • 5077.16 average transactions/sec, 5077 rows/sec
  • the left side image shows interruptions caused by the LogWriter process, i.e. IO contention

The second test uses JDBC batch insert; the modified Java code follows, where the batching calls (setExecuteBatch/sendBatch) and the surrounding loop are the additions:

insPs = connection.prepareStatement("INSERT INTO TESTCOLLECTION VALUES (?,?,?,?,?)");
((OraclePreparedStatement) insPs).setExecuteBatch(100);
for (int i=0;i<=100;i++) {
insPs.setString(1, RandomUtilities.randomAlpha(32, 32));
insPs.setTimestamp(2,new Timestamp(System.currentTimeMillis()));
insPs.setTimestamp(3,new Timestamp(System.currentTimeMillis()));
insPs.setString(4, RandomUtilities.randomAlpha(32, 32));
insPs.setString(5, "{\"name\": \"todd\", \"is_cool\": true, \"age\": 43}");
insPs.execute();
}
((OraclePreparedStatement) insPs).sendBatch ();
connection.commit();
addInsertStatements(100);
addCommitStatements(1);
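
Note that setExecuteBatch/sendBatch is the Oracle driver's proprietary batching API, and its deprecation is likely what triggers the warning in the Ant output above. The same effect can be obtained with standard JDBC batching; a sketch:

insPs = connection.prepareStatement("INSERT INTO TESTCOLLECTION VALUES (?,?,?,?,?)");
for (int i = 0; i < 100; i++) {
    insPs.setString(1, RandomUtilities.randomAlpha(32, 32));
    insPs.setTimestamp(2, new Timestamp(System.currentTimeMillis()));
    insPs.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
    insPs.setString(4, RandomUtilities.randomAlpha(32, 32));
    insPs.setString(5, "{\"name\": \"todd\", \"is_cool\": true, \"age\": 43}");
    insPs.addBatch();        // queue the row locally
}
insPs.executeBatch();        // single round trip for the whole batch
connection.commit();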
left side: completed test; right side: test start
  • 11.8 MiB/s maximum throughput, 6.6 MiB/s typical
  • 239.4 average transactions/sec, 23940 rows/sec (each transaction commits a 100-row batch)
  • the left side image shows a big slowdown caused by the LogWriter process, i.e. IO contention

The next test uses the append hint and a table created with the nologging attribute, with the same test parameters (1 million rows, 16 parallel threads):

left side: completed test; right side: test start
  • 3.08 MiB/s maximum throughput, 2.90 MiB/s typical
  • 4887.46 average transactions/sec, 4887 rows/sec
  • fewer interruptions by the LogWriter process than in the previous test

The same test with the append hint and the nologging table attribute, now using JDBC batch inserts:

left side: completed test; right side: test start
  • 11.9 MiB/s maximum throughput, 10.9 MiB/s typical
  • 272.54 average transactions/sec, 27254 rows/sec
  • the left side image shows a big slowdown caused by the LogWriter process, i.e. IO contention

Finally, the memoptimize_write tests, first without JDBC batch inserts:

left side: completed test; right side: test start
  • 13.9 MiB/s maximum throughput, 13.6 MiB/s typical
  • 19075.31 average transactions/sec, 19075 rows/sec
  • no interruptions by the LogWriter process, 100% CPU usage

And then combined with JDBC batch inserts:

left side: completed test; right side: test start
  • 13.5 MiB/s maximum throughput, 13.2 MiB/s typical
  • 370.74 average transactions/sec, 37074 rows/sec
  • no interruptions by the LogWriter process, 100% CPU usage
  • less network traffic than in the previous example
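
If you want to watch the fast ingest buffer while a test runs, 19c has (if I remember the view name correctly; check the Database Reference for your release) a v$memoptimize_write_area view showing space usage; a generic dump from JDBC:

// dump the fast ingest buffer statistics; the view name is my recollection
// of the 19c dictionary, verify it in your release's Database Reference
try (PreparedStatement ps = connection.prepareStatement(
        "SELECT * FROM v$memoptimize_write_area");
     ResultSet rs = ps.executeQuery()) {
    ResultSetMetaData md = rs.getMetaData();
    while (rs.next())
        for (int i = 1; i <= md.getColumnCount(); i++)
            System.out.println(md.getColumnName(i) + " = " + rs.getString(i));
}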

Conclusion

As we can see in the last screenshot, a memoptimize_write table/insert combined with JDBC batch insert is the winner, achieving 7x more rows/sec than plain JDBC inserts into a regular table. And compared with the typical append/nologging optimization for large inserts, the memoptimize_write hint gives 1.36x faster ingest (37074 rows/sec versus 27254 rows/sec).

We also see practically NO IO contention in memoptimize_write mode, which means no blocked transactions on the application side and full CPU usage.

All tests were executed on a notebook with an Intel i7 processor and an M.2 SSD, which is not the best combination for an RDBMS server, but even with this modest hardware we can ingest 1 million rows in less than a minute (1.36x faster than the append/nologging/batch mode).

Finally, memoptimize_write is not only good for IoT applications; there are several other situations where you do massive inserts into tables, for example audit tables and application logging, among others. With this new way of configuring tables you will get a lot of benefits!!! And remember to pay attention to the above disclaimer and do not break your license terms.
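
As far as I know you do not even need to recreate an existing table: the attribute can be toggled with an ALTER TABLE, here issued from JDBC against a hypothetical AUDIT_LOG table (the instance parameters shown at the beginning of this post are still required):

// AUDIT_LOG is a hypothetical table name used only for illustration
try (Statement st = connection.createStatement()) {
    st.execute("ALTER TABLE AUDIT_LOG MEMOPTIMIZE FOR WRITE");
}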
