Cassandra 2.0.0-beta1 has been released

The latest development release, 2.0.0-beta1, is now available for download.

Full list of changes:

  • Removed on-heap row cache (CASSANDRA-5348)
  • use nanotime consistently for node-local timeouts (CASSANDRA-5581)
  • Avoid unnecessary second pass on name-based queries (CASSANDRA-5577)
  • Experimental triggers (CASSANDRA-1311)
  • JEMalloc support for off-heap allocation (CASSANDRA-3997)
  • Single-pass compaction (CASSANDRA-4180)
  • Removed token range bisection (CASSANDRA-5518)
  • Removed compatibility with pre-1.2.5 sstables and network messages (CASSANDRA-5511)
  • removed PBSPredictor (CASSANDRA-5455)
  • CAS support (CASSANDRA-5062, 5441, 5442, 5443, 5619, 5667)
  • Leveled compaction performs size-tiered compactions in L0 (CASSANDRA-5371, 5439)
  • Add yaml network topology snitch for mixed ec2/other envs (CASSANDRA-5339)
  • Log when a node is down longer than the hint window (CASSANDRA-4554)
  • Optimize tombstone creation for ExpiringColumns (CASSANDRA-4917)
  • Improve LeveledScanner work estimation (CASSANDRA-5250, 5407)
  • Replace compaction lock with runWithCompactionsDisabled (CASSANDRA-3430)
  • Change Message IDs to ints (CASSANDRA-5307)
  • Move sstable level information into the Stats component, removing the need for a separate Manifest file (CASSANDRA-4872)
  • avoid serializing to byte[] on commitlog append (CASSANDRA-5199)
  • make index_interval configurable per columnfamily (CASSANDRA-3961, CASSANDRA-5650)
  • add default_time_to_live (CASSANDRA-3974)
  • add memtable_flush_period_in_ms (CASSANDRA-4237)
  • replace supercolumns internally by composites (CASSANDRA-3237, 5123)
  • upgrade thrift to 0.9.0 (CASSANDRA-3719)
  • drop unnecessary keyspace parameter from user-defined compaction API (CASSANDRA-5139)
  • more robust solution to incomplete compactions + counters (CASSANDRA-5151)
  • Change order of directory searching for c*.in.sh (CASSANDRA-3983)
  • Add tool to reset SSTable compaction level for LCS (CASSANDRA-5271)
  • Allow custom configuration loader (CASSANDRA-5045)
  • Remove memory emergency pressure valve logic (CASSANDRA-3534)
  • Reduce request latency with eager retry (CASSANDRA-4705)
  • cqlsh: Remove ASSUME command (CASSANDRA-5331)
  • Rebuild BF when loading sstables if bloom_filter_fp_chance has changed since compaction (CASSANDRA-5015)
  • remove row-level bloom filters (CASSANDRA-4885)
  • Change Kernel Page Cache skipping into row preheating (disabled by default) (CASSANDRA-4937)
  • Improve repair by deciding on a gcBefore before sending out TreeRequests (CASSANDRA-4932)
  • Add an official way to disable compactions (CASSANDRA-5074)
  • Reenable ALTER TABLE DROP with new semantics (CASSANDRA-3919)
  • Add binary protocol versioning (CASSANDRA-5436)
  • Swap THshaServer for TThreadedSelectorServer (CASSANDRA-5530)
  • Add alias support to SELECT statement (CASSANDRA-5075)
  • Don’t create empty RowMutations in CommitLogReplayer (CASSANDRA-5541)
  • Use range tombstones when dropping cfs/columns from schema (CASSANDRA-5579)
  • cqlsh: drop CQL2/CQL3-beta support (CASSANDRA-5585)
  • Track max/min column names in sstables to be able to optimize slice queries (CASSANDRA-5514, CASSANDRA-5595, CASSANDRA-5600)
  • Binary protocol: allow batching already prepared statements (CASSANDRA-4693)
  • Allow preparing timestamp, ttl and limit in CQL3 queries (CASSANDRA-4450)
  • Support native link w/o JNA in Java7 (CASSANDRA-3734)
  • Use SASL authentication in binary protocol v2 (CASSANDRA-5545)
  • Replace Thrift HsHa with LMAX Disruptor based implementation (CASSANDRA-5582)
  • cqlsh: Add row count to SELECT output (CASSANDRA-5636)
  • Include a timestamp with all read commands to determine column expiration (CASSANDRA-5149)
  • Streaming 2.0 (CASSANDRA-5286, 5699)
  • Conditional create/drop ks/table/index statements in CQL3 (CASSANDRA-2737)
  • more pre-table creation property validation (CASSANDRA-5693)
  • Redesign repair messages (CASSANDRA-5426)
  • Fix ALTER RENAME post-5125 (CASSANDRA-5702)
  • Disallow renaming a 2ndary indexed column (CASSANDRA-5705)
  • Rename Table to Keyspace (CASSANDRA-5613)
  • Ensure changing column_index_size_in_kb on different nodes doesn’t corrupt the sstable (CASSANDRA-5454)
  • Move resultset type information into prepare, not execute (CASSANDRA-5649)
  • Auto paging in binary protocol (CASSANDRA-4415, 5714)
  • Don’t tie client side use of AbstractType to JDBC (CASSANDRA-4495)
  • Adds new TimestampType to replace DateType (CASSANDRA-5723, CASSANDRA-5729)

 

RethinkDB 1.7 has been released

RethinkDB 1.7 has been released and is available for download.

This release includes the following features and improvements:

  • Tools for CSV and JSON import and export (see the sketch below)
  • Support for hot backup and restore
  • ReQL support for atomic set and get operations
  • A powerful new syntax for handling nested documents
  • Greater than 10x performance improvement on document inserts
  • Native binaries for CentOS / RHEL
  • A number of small ReQL improvements (see the full release notes)

See the full list of over 30 bug fixes, features, and enhancements.
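To make the new import/export tools concrete, here is a hedged sketch of shell invocations (the exact flags are assumptions based on the 1.7 command line tools; consult rethinkdb import --help and rethinkdb export --help for the authoritative options):

    # Import a JSON file into the test.users table (flags assumed, illustrative only)
    rethinkdb import -c localhost:28015 -f users.json --table test.users --format json

    # Export the same table back out (defaults to JSON)
    rethinkdb export -c localhost:28015 -e test.users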

Riak 1.3.2 released

A performance and bugfix release in the Riak 1.3 series.
The release notes for Riak can be found here:
https://github.com/basho/riak/blob/1.3/RELEASE-NOTES.md

The packages can be found here:
http://docs.basho.com/riak/1.3.2/downloads/

New Features or Major Improvements for Riak

eLevelDB Verify Compactions

In Riak 1.2 we added code to have leveldb automatically shunt corrupted blocks to the lost/BLOCKS.bad file during a compaction. This kept compactions from going into an infinite loop over an issue that A) read repair and AAE could fix behind the scenes and B) took up a great deal of customer support / engineering time when customers had to fix it manually.

Unfortunately, we did not realize that only one of two corruption tests was actually active during a compaction. There is a CRC test that applies to all blocks, including file metadata, and the compression logic has a hash test that applies only to compressed data blocks. The CRC test was not active by default, and leveldb performs few defensive tests beyond the CRC. A corrupted disk file could therefore readily crash leveldb / Riak, unless the bad block happened to be detected by the compression hash test.

Google’s answer to this problem is the paranoid_checks option, which defaults to false. Unfortunately, setting it to true activates not only the compaction CRC test but also a CRC test of the recovery log. A CRC failure in the recovery log after a crash is expected, and is used by the existing code to drive automated recovery on the next startup; setting paranoid_checks to true will actually stop that automatic recovery. This second behavior is undesirable.

This release introduces a new option, verify_compactions. The background CRC test previously controlled by paranoid_checks is now controlled by this new option, while the recovery log CRC check remains under paranoid_checks. verify_compactions defaults to true; paranoid_checks continues to default to false.
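A hedged sketch of what this might look like in the eleveldb section of app.config (the option names come straight from these notes, but the exact placement and syntax should be checked against Basho's documentation):

    %% app.config (excerpt) -- illustrative sketch only
    {eleveldb, [
        %% new in 1.3.2: CRC-check blocks during background compactions
        {verify_compactions, true},
        %% still controls only the recovery log CRC check; leave false
        {paranoid_checks, false}
    ]}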

Note: CRC calculations are typically expensive. Riak 1.3 added code to leveldb to use Intel hardware CRC on 64-bit servers where available, and Riak 1.2 added code to create multiple, prioritized compaction threads. Together, these earlier features minimize / hide the impact of the increased CRC workload during background compactions.

Erlang Scheduler Collapse

All Erlang/OTP releases prior to R16B01 are vulnerable to the Erlang computation scheduler threads going to sleep too aggressively. The sleeping periods reduce power consumption and inter-thread resource contention, but schedulers that sleep too aggressively can leave runnable work stranded.

This release of Riak EDS requires a patch to the Erlang/OTP virtual machine to force sleeping scheduler threads to wake up at regular intervals. The flag +sfwi 500 must also be present in the vm.args file; the value is in milliseconds and may need tuning for your application. For the open source Riak release, the patch (and the extra vm.args flag) is recommended; the patch can be found at https://gist.github.com/evanmcc/a599f4c6374338ed672e.
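For reference, the flag as it appears in vm.args (the value is in milliseconds, per the notes above):

    ## vm.args (excerpt): wake sleeping scheduler threads every 500 ms
    +sfwi 500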

Overload Protection / Work Shedding

As of 1.3.2, Riak includes built-in overload protection. If a node becomes overloaded, Riak will now immediately respond {error, overload} rather than perpetually enqueuing requests and making the situation worse.

Previously, Riak would always enqueue requests. As an overload situation became worse, requests would take longer and longer to service, eventually getting to the point where requests would continually timeout. In extreme scenarios, Riak nodes could become unresponsive and ultimately crash.

The new overload protection addresses these issues.

The overload protection is configurable through app.config settings. The default settings have been tested on clusters of varying sizes and request rates and should be sufficient for all users of Riak. However, for completeness, the new settings are explained below.

There are two types of overload protection in Riak, each with different settings. The first limits the number of in-flight get and put operations initiated by a node in a Riak cluster. This is configured through the riak_kv/fsm_limit setting. The default is 50000. This limit is tracked separately for get and put requests, so the default allows up to 100000 in-flight requests in total.

The second type of overload protection limits the message queue size for individual vnodes, setting an upper bound on unserviced requests on a per-vnode basis. This is configured through the riak_core/vnode_overload_threshold setting and defaults to 10000 messages.

Setting either option to undefined in app.config disables overload protection; this is not recommended. Note that leaving the options out of app.config entirely simply uses the defaults mentioned above.
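A hedged sketch of the corresponding app.config entries, shown with their default values (the section placement is inferred from the riak_kv/fsm_limit and riak_core/vnode_overload_threshold names; verify against the official docs):

    %% app.config (excerpt) -- defaults shown; undefined disables (not recommended)
    {riak_kv, [
        {fsm_limit, 50000}                 %% tracked separately for gets and puts
    ]},
    {riak_core, [
        {vnode_overload_threshold, 10000}  %% max unserviced messages per vnode
    ]}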

The overload protection provides new stats that are exposed over the /stats endpoint.

The dropped_vnode_requests_total stat counts the number of messages discarded due to the vnode overload protection.

For the get/put overload protection, there are several new stats. The stats related to gets are listed below; there are equivalent versions for puts.

The node_get_fsm_active and node_get_fsm_active_60s stats show how many gets were active on the node within the last second or the last minute, respectively. The node_get_fsm_in_rate and node_get_fsm_out_rate stats track the number of requests initiated and completed within the last second. Finally, node_get_fsm_rejected, node_get_fsm_rejected_60s, and node_get_fsm_rejected_total track the number of requests discarded due to overload in their respective time windows.
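All of these counters can be read from the stats endpoint like any other Riak stat; for example, assuming the default HTTP port 8098:

    # Fetch node stats and pick out the new overload counters
    curl -s http://localhost:8098/stats | python -m json.tool | grep -E 'rejected|dropped_vnode'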

Health Check Disabled

The health check feature that shipped in Riak 1.3.0 has been disabled as of Riak 1.3.2. The new overload protection feature serves a similar purpose and is much safer. Specifically, the health check approach could successfully recover from overload caused by slow nodes, but not from overload caused by incoming workload spiking beyond absolute cluster capacity. In the second case, the health check approach (diverting overload traffic from one node to another) would in fact exacerbate the problem.

UnQLite

UnQLite is an embeddable NoSQL (key/value store and document store) database engine. Unlike most other NoSQL databases, UnQLite does not have a separate server process; it reads and writes directly to ordinary disk files, and a complete database with multiple collections is contained in a single disk file. The database file format is cross-platform: you can freely copy a database between 32-bit and 64-bit systems or between big-endian and little-endian architectures.
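To make the embeddable, single-file nature concrete, here is a minimal sketch of a key/value round trip using the documented C API (unqlite_open / unqlite_kv_store / unqlite_kv_fetch; error handling trimmed for brevity):

    /* minimal UnQLite key/value round trip -- illustrative sketch */
    #include <stdio.h>
    #include "unqlite.h"

    int main(void) {
        unqlite *db;
        char buf[64];
        unqlite_int64 len = sizeof(buf);

        /* open (or create) the single-file database */
        if (unqlite_open(&db, "test.db", UNQLITE_OPEN_CREATE) != UNQLITE_OK)
            return 1;

        /* store a key/value pair; -1 means "key is NUL-terminated" */
        unqlite_kv_store(db, "greeting", -1, "hello, unqlite", 14);

        /* fetch it back; len is in/out (buffer size in, value size out) */
        if (unqlite_kv_fetch(db, "greeting", -1, buf, &len) == UNQLITE_OK)
            printf("%.*s\n", (int)len, buf);

        unqlite_close(db);
        return 0;
    }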

More information on the official website: http://www.unqlite.org/

Redis 2.6.13 has been released

Redis 2.6.13 has been released. It is a recommended upgrade, especially if you experienced:

1) Strange issues with Lua scripting.

2) A reappearing master not being reconfigured when using Sentinel.

3) The server continuously trying to save after a save error.

(This version of Redis may also help with AOF and slow / busy disks and latency issues.)

* [FIX] Throttle BGSAVE attempt on saving error.
* [FIX] redis-cli: raise error on bad command line switch.
* [FIX] Redis/Jemalloc Gitignore were too aggressive.
* [FIX] Test: fix RDB test checking file permissions.
* [FIX] Sentinel: always redirect on master->slave transition.
* [FIX] Lua updated to version 5.1.5. Fixes rare scripting issues.
* [NEW] AOF: improved latency figures with slow/busy disks.
* [NEW] Sentinel: turn old master into a slave when it comes back.
* [NEW] More explicit panic message on out of memory.
* [NEW] redis-cli: --latency-history mode implemented.
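As an example, the new latency mode can be run straight from redis-cli (the -i flag sets the sampling window in seconds; usage assumed from the redis-cli help text):

    # Sample latency continuously, printing a fresh summary every 15 seconds
    redis-cli --latency-history -i 15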

Download: http://redis.io/download

Light Table 0.4 has been released

Light Table 0.4 has been released and can be downloaded here.
The full list of changes includes:
  • FIX: change bundle id for Mac .app
  • FIX: make the fuzzy matching take separators into account
  • FIX: setting the exclude path didn’t take effect until restart
  • FIX: remove errant print statement (#405)
  • FIX: pipe separator highlights (#406)
  • FIX: dramatically improve rendering performance.
  • FIX: correctly parse version parts to numbers for comparison.
  • FIX: set syntax needed a better error message and description (#388)
  • FIX: better searching of the PATH on windows
  • FIX: don’t fail startup if a file/folder in a workspace was deleted
  • FIX: default exclude pattern was too greedy
  • FIX: handle semi-colonless JS much better
  • FIX: remove the tab symbols from the solarized theme
  • FIX: workspace buttons no longer overflow
  • FIX: handle the no available client much more gracefully
  • ADDED: the ability to split the window into multiple tabsets
  • ADDED: you can now have multiple windows open (Cmd/Ctrl-Shift-N to open a window, Cmd/Ctrl-Shift-W to close)
  • ADDED: python eval!
  • ADDED: ipython client integration
  • ADDED: nodejs client
  • ADDED: browser tab (commands: Browser: add browser tab, Browser: refresh active browser tab)
  • ADDED: browser client using chrome-devtools
  • ADDED: Magical JS VM patching for live updates through the devtools integration
  • ADDED: command grouping
  • ADDED: connect tab that now shows which clients are active
  • ADDED: you can now unset a client from an editor
  • ADDED: connect tab now has add connection that lists all available client types
  • ADDED: executing a command by name with a keybinding will prompt you with the keybinding
  • ADDED: token-based auto-complete (press tab after a character)
  • ADDED: trailing whitespace is now removed on save (use the toggle remove trailing whitespace command to disable)
  • ADDED: line-ending detection on save
  • ADDED: You can now eval any arbitrary selection, just select text and press cmd/ctrl+enter
  • ADDED: Better styling for filter lists
  • ADDED: greatly improved startup time
  • ADDED: new folder, new file, rename, and delete to workspace context menu
  • ADDED: workspaces now watch the file system for changes
  • ADDED: Inline inspectable results for Javascript
  • ADDED: Console inspectable results for Javascript
  • ADDED: A greatly improved console with source information
  • ADDED: You can now put the console in a tab via the Console: Open the console in a tab command
  • ADDED: cancelable eval for Clojure and Python
  • ADDED: editor context menu for cut/copy/paste
  • ADDED: Light Table Docs! (command: Docs: Open Light Table's documentation)
  • ADDED: Recent workspaces are remembered; added the Workspace: Create new workspace command
  • CHANGED: clients tab is now connect
  • CHANGED: moved to acorn for Javascript parsing instead of Esprima
  • CHANGED: completely remove JQuery for significant memory performance increases
  • UPDATED: latest codemirror

More details available here:

http://www.chris-granger.com/2013/04/28/light-table-040/

Oracle NoSQL Database 2.0.39 released

Oracle NoSQL Database 2.0.39 has been released and introduces several improvements, a couple of new Oracle product integration points, and a number of important bug fixes. These new features and fixes include:

– An integration with Oracle Coherence has been provided that allows Oracle NoSQL Database to be used as a cache for Oracle Coherence applications, also allowing applications to directly access cached data from Oracle NoSQL Database. Documentation can be found at http://bit.ly/14e6jEP.

– Oracle NoSQL Database Enterprise Edition now has support for semantic technologies. Specifically, the Resource Description Framework (RDF), the SPARQL query language, and a subset of the Web Ontology Language (OWL) are now supported. These capabilities are referred to as the RDF Graph feature of Oracle NoSQL Database. The RDF Graph feature provides a Java-based interface to store and query semantic data in Oracle NoSQL Database Enterprise Edition. Documentation can be found at http://bit.ly/Y7aQX4.

Find the complete list of changes in the change log.

Changelog: http://bit.ly/ZweZDS
Download: http://bit.ly/yLGVg3

VoltDB v3.2 has been released

VoltDB v3.2 has been released and can be downloaded here: http://voltdb.com/community/downloads.php

Changes include:

  • Enhanced Support for Live Schema Updates
  • Improved Performance and Resilience of Catalog Updates
  • New Return Status for Snapshot Restore
  • Change to the Default Heartbeat Timeout

 

The following issues have been fixed:

  • Automated snapshots and node failure

    It was possible for automated snapshots to silently stop occurring after a node failed and rejoined the cluster. This did not happen all the time, but could not be corrected without restarting the cluster. This issue has been corrected.

  • The sqlcmd command and stored procedure names

    Previously, the sqlcmd command line tool could not invoke a stored procedure if the procedure name started with a SQL statement keyword, such as “select” or “delete”. This issue has been corrected.

  • Enterprise Manager fails to recognize cluster changes

    In recent versions of VoltDB, it was possible for the Enterprise Manager to start a database cluster but not recognize when the database completed startup. Similarly, if a node failed to rejoin or a recover operation did not complete, the Enterprise Manager might not recognize these conditions. The symptom in all cases was that the database or server icon would not stop “spinning” in the Enterprise Manager control panel. These issues are now fixed.

MemSQL ships 2.0. Scales in-memory database across hundreds of nodes, thousands of cores

MemSQL runs on 64-bit Linux and is ideally suited for machines with multi-core processors and at least 8 GB of RAM.

MemSQL’s goal was to deliver the fastest OLTP database ever. Inspired by the scale and architectures we saw at Facebook, we hoped to help every enterprise leverage in-memory technologies similar to those that leading web companies use.

Customers like Zynga and Morgan Stanley not only wanted to quickly commit transactions to the database, they also wanted instant answers to questions about how their real-time data compared to historical data. This inspired the MemSQL team to build something new – a solution that supports highly concurrent transactional and analytical workloads at Big Data scale.

Today MemSQL’s real-time analytics platform is available for download. This is the first generally available version of MemSQL that scales horizontally on commodity hardware. It provides the blazing fast performance for which MemSQL is known, and now does it at Big Data scale. Customers have deployed MemSQL across hundreds of nodes and dozens of terabytes of data, and we’ve tested at even greater volumes and velocities. (Check out our calculator to get an idea of the number of reads and writes you can perform depending on the size of your cluster.)

This is also the first version to include MemSQL Watch, a visual web-based interface for monitoring and managing your cluster. We expect this to be the beginning of our foray into real-time visualizations as many of our customers look to operationalize their analytics.

Deploying a database can be difficult, so we’ve made it as simple as possible. Download MemSQL for free on our site and take it for a spin. You’ll definitely be impressed by the performance, but you’ll also be impressed by what’s missing:

  • Batched loading – Don’t wait until the middle of the night to refresh your reports.
  • Complicated programming languages (and a limited talent pool) – Use SQL for real-time analytics.
  • An expensive, proprietary box (and a plan to rip and replace it in a few years) – Scale incrementally on commodity hardware.
  • A lengthy implementation cycle – Launch your first MemSQL instance in minutes in the cloud.

Cassandra 1.1.11 has been released

http://cassandra.apache.org/download/

This is a maintenance/bug fix release[1] on the 1.1 series. As always,
please pay careful attention to the release notes[2] and let us know[3]
right away if you have any problems.

[1]: http://goo.gl/QfZlg (CHANGES.txt)
[2]: http://goo.gl/O55QF (NEWS.txt)
[3]: https://issues.apache.org/jira/browse/CASSANDRA