ArangoDB 2.5.1 is available for download.
It mainly adds a slow-query log and the ability to kill running queries through the HTTP API and the web interface.
ArangoDB now has an HTTP interface for retrieving the list of currently executing AQL queries and the list of slow AQL queries. The added option
--database.slow-query-threshold can be used to change the default AQL slow-query threshold value on server start.
Running AQL queries can also be killed on the server. ArangoDB provides a kill facility via an HTTP interface. To kill a running query, its id (as returned for the query in the list of currently running queries) must be specified. The kill flag of the query will then be set, and the query will be aborted as soon as it reaches a cancellation point.
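The list/kill workflow above can be sketched against ArangoDB's query endpoints. The server URL, database name, and query id below are placeholders; endpoint paths follow the ArangoDB 2.5 HTTP API, but treat this as a sketch rather than a definitive client.

```python
# Sketch: listing slow queries and killing a running AQL query via
# ArangoDB's HTTP API. BASE and the query id are illustrative values.
import json
import urllib.request

BASE = "http://localhost:8529/_db/_system"

def kill_query_url(query_id):
    """Build the DELETE URL that aborts the running query with this id."""
    return "%s/_api/query/%s" % (BASE, query_id)

def list_slow_queries():
    """Fetch the recorded slow queries (requires a running server)."""
    with urllib.request.urlopen(BASE + "/_api/query/slow") as resp:
        return json.loads(resp.read().decode("utf-8"))

def kill_query(query_id):
    """Set the kill flag on a running query; it aborts at the next
    cancellation point."""
    req = urllib.request.Request(kill_query_url(query_id), method="DELETE")
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```

A typical flow is to call `list_slow_queries()` (or the current-queries endpoint), pick an offending query's id, and pass it to `kill_query()`.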
RethinkDB 2.0 release candidate is now available.
RethinkDB caught my attention last year when it introduced its changes command, which basically provides what could be called a “trigger subscription”: a way to subscribe to change notifications in the database.
The 2.0 release sounds promising but still requires some testing.
FoundationDB 2.0 combines the power of ACID transactions with the scalability, fault tolerance, and operational elegance of distributed NoSQL databases. This release was driven by specific customer feedback for increased language support, network security, and higher-level tools for managing data within FoundationDB.
FoundationDB 2.0 adds Go and PHP to the list of languages with native FoundationDB support.
Along with the additional language and layer support, 2.0 also ships with full Transport Layer Security which encrypts all FoundationDB network traffic, enabling security and authentication between both servers and clients via a public/private key infrastructure.
Also in 2.0, monitoring improvements report more detailed information about potential low-memory scenarios even before they happen.
FoundationDB 2.0 is backwards-compatible with all previous API versions, so any code that you wrote against an old version of FoundationDB will still run. There have been minimal API changes, so updating your code to the new API version will be a snap.
Download FoundationDB 2.0
Upgrade as documented here (just remember that you’ll need to upgrade both clients and servers at the same time).
More information on the Google Group
Redis 3.0.0 Beta 1 (version 2.9.50) is out.
Release date: 11 Feb 2014
This is the first beta of Redis 3.0.0 (official version is 2.9.50).
The following is a list of improvements in Redis 3.0, compared to Redis 2.8.
- [NEW] Redis Cluster: a distributed implementation of a subset of Redis.
- [NEW] New “embedded string” object encoding resulting in less cache misses. Big speed gain under certain work loads.
- [NEW] WAIT command to block waiting for a write to be transmitted to the specified number of slaves.
- [NEW] MIGRATE connection caching. Much faster keys migrations.
- [NEW] MIGRATE new options COPY and REPLACE.
- [NEW] CLIENT PAUSE command: stop processing client requests for a specified amount of time.
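To make the Redis Cluster item above concrete: the cluster distributes keys by mapping each key to one of 16384 hash slots using CRC16 (XMODEM variant) modulo 16384, and keys containing a `{...}` hash tag are hashed only on the tagged part so related keys land on the same node. A minimal sketch of that mapping:

```python
# Conceptual sketch of Redis Cluster's key-to-slot mapping:
# slot = CRC16(key) mod 16384, with "{...}" hash-tag handling.

def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM variant): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of Redis Cluster's 16384 hash slots."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # non-empty hash tag: hash only its content
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Hash tags force related keys into the same slot (and thus the same node):
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True
```

This is why multi-key operations in the cluster require all keys involved to hash to the same slot.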
Mesos 0.13 has been released; it fixes many bugs and includes the following improvements:
- [MESOS-46] – Refactor MasterTest to use fixture
- [MESOS-134] – Add Python documentation
- [MESOS-140] – Unrecognized command line args should fail the process
- [MESOS-242] – Add more tests to Dominant Share Allocator
- [MESOS-305] – Inform the frameworks / slaves about a master failover
- [MESOS-346] – Improve OSX configure output when deprecated headers are present.
- [MESOS-360] – Mesos jar should be built for java 6
- [MESOS-409] – Master detector code should stat nodes before attempting to create
- [MESOS-472] – Separate ResourceStatistics::cpu_time into ResourceStatistics::cpu_user_time and ResourceStatistics::cpu_system_time.
- [MESOS-493] – Expose version information in http endpoints
- [MESOS-503] – Master should log LOST messages sent to the framework
- [MESOS-526] – Change slave command line flag from ‘safe’ to ‘strict’
- [MESOS-602] – Allow Mesos native library to be loaded from an absolute path
- [MESOS-603] – Add support for better test output in newer versions of autools
Download the most recent stable release: 0.13.0. (Release Notes)
In five years, Apache Cassandra has grown into one of the most widely used NoSQL databases in the world and serves as the backbone for some of today’s most popular applications, including Facebook, Netflix, and Twitter.
This newest version, Cassandra 2.0, just announced, includes multiple new features. But perhaps the biggest of them is that “Cassandra 2.0 makes it easier than ever for developers to migrate from relational databases and become productive quickly.”
New features and improvements include:
- Lightweight transactions, which ensure operation linearizability similar to the serializable isolation level offered by relational databases, preventing conflicts during concurrent requests
- Triggers, which enable pushing performance-critical code close to the data it deals with, and simplify integration with event-driven frameworks like Storm
- CQL enhancements such as cursors and improved index support
- Improved compaction, keeping read performance from deteriorating under heavy write load
- Eager retries to avoid query timeouts by sending redundant requests to other replicas if too much time elapses on the original request
- Custom Thrift server implementation based on LMAX Disruptor that achieves lower message processing latencies and better throughput with flexible buffer allocation strategies
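The lightweight-transactions item above is essentially compare-and-set: an insert or update is applied only if a stated condition on the current data still holds. The semantics can be sketched in plain Python; the class and column names are illustrative, not Cassandra API.

```python
# Conceptual sketch of the compare-and-set semantics behind Cassandra's
# lightweight transactions (CQL's IF NOT EXISTS / IF col = value).

class Row:
    def __init__(self):
        self.columns = {}

    def insert_if_not_exists(self, column, value):
        """Like CQL's INSERT ... IF NOT EXISTS: only the first writer wins."""
        if column in self.columns:
            return False        # analogous to [applied] = false
        self.columns[column] = value
        return True

    def update_if(self, column, expected, new_value):
        """Like CQL's UPDATE ... IF col = expected: apply only if the
        current value matches what the caller last read."""
        if self.columns.get(column) != expected:
            return False
        self.columns[column] = new_value
        return True

row = Row()
print(row.insert_if_not_exists("username", "alice"))  # True
print(row.insert_if_not_exists("username", "bob"))    # False: already taken
```

The classic use case is exactly this "claim a unique username" pattern, where a plain write would silently overwrite a concurrent claim.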
The Mongo-Hadoop Adapter 1.1 has been released. It makes it easy to use MongoDB databases, or MongoDB backup files in .bson format, as the input source or output destination for Hadoop Map/Reduce jobs. By inspecting the data and computing input splits, Hadoop can process the data in parallel so that very large datasets can be processed quickly.
The Mongo-Hadoop adapter also includes support for Pig and Hive, which allow sophisticated MapReduce workflows to be executed just by writing simple scripts.
- Pig is a high-level scripting language for data analysis and building map/reduce workflows
- Hive is a SQL-like language for ad-hoc queries and analysis of data sets on Hadoop-compatible file systems.
Hadoop streaming is also supported, so map/reduce functions can be written in languages other than Java. Right now the Mongo-Hadoop adapter supports streaming in Ruby, Node.js, and Python.
How it Works
- The adapter examines the MongoDB Collection and calculates a set of splits from the data
- Each of the splits gets assigned to a node in the Hadoop cluster
- In parallel, Hadoop nodes pull data for their splits from MongoDB (or BSON) and process them locally
- Hadoop merges results and streams output back to MongoDB or BSON
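The four steps above can be sketched as a toy split/map/merge flow. Real Mongo-Hadoop computes splits from collection metadata and runs the map step on cluster nodes; the documents, split size, and word-count job here are stand-ins.

```python
# Toy sketch of the split -> parallel map -> merge flow described above.

def make_splits(docs, split_size):
    """Step 1: divide the document list into fixed-size input splits."""
    return [docs[i:i + split_size] for i in range(0, len(docs), split_size)]

def process_split(split):
    """Steps 2-3: the map work one Hadoop node would do on its split
    (here, a word count over each document's text)."""
    counts = {}
    for doc in split:
        for word in doc["text"].split():
            counts[word] = counts.get(word, 0) + 1
    return counts

def merge(per_split_results):
    """Step 4: combine the per-split results into the final output."""
    total = {}
    for counts in per_split_results:
        for word, n in counts.items():
            total[word] = total.get(word, 0) + n
    return total

docs = [{"text": "mongo hadoop"}, {"text": "hadoop streaming"}, {"text": "mongo"}]
splits = make_splits(docs, 2)
print(len(splits))                                   # 2
print(merge(process_split(s) for s in splits))
```

In the real adapter, each `process_split` runs on a different node against data pulled directly from MongoDB (or BSON), which is what makes the job parallel.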
FieldDB beta was officially launched in English and Spanish on August 1st, 2012 in Patzun, Guatemala as an app for field linguists.
More information about FieldDB is available here: https://github.com/OpenSourceFieldlinguistics/FieldDB
OrientDB 1.5 has been released; it fixes a bunch of issues and brings the following new features and enhancements:
All the issues: https://github.com/orientechnologies/orientdb/issues?milestone=5&page=1&state=closed
- New PLOCAL (Paginated Local) storage engine. Compared with LOCAL, it is more durable (no use of MMAP) and supports better concurrency on parallel transactions. To migrate your database to PLOCAL, follow this guide: migrate-from-local-storage-engine-to-plocal
- New Hash Index type with better performance on lookups. It does not support ranges
- New “transactional” SQL command to execute commands inside a transaction. This is useful for the “create edge” SQL command, to avoid corrupting the graph
- Import now migrates RIDs, allowing a database to be imported into a different database from the original
- “Breadth first” strategy added on traversing (Java and SQL APIs)
- Server can limit maximum live connections (to prevent DoS)
- Fetch plan support in SQL statements and in binary protocol for synchronous commands too
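The hash index item above comes with a trade-off worth spelling out: a hash table gives fast exact-match lookups but cannot answer range queries, which need an ordered structure. A conceptual sketch (plain Python data structures, not OrientDB API):

```python
# Conceptual sketch of the hash-index trade-off: exact lookups via a
# hash table vs. range queries via an ordered structure. The index
# contents are illustrative.
import bisect

hash_index = {"alice": 1, "bob": 2, "carol": 3}   # exact-match lookups only
sorted_keys = sorted(hash_index)                  # ordered: supports ranges

def exact_lookup(key):
    """O(1) average-case lookup, the hash index's strong point."""
    return hash_index.get(key)

def range_lookup(lo, hi):
    """All keys with lo <= key < hi, via binary search on the sorted
    keys, something a pure hash index cannot do."""
    i = bisect.bisect_left(sorted_keys, lo)
    j = bisect.bisect_left(sorted_keys, hi)
    return sorted_keys[i:j]

print(exact_lookup("bob"))       # 2
print(range_lookup("a", "c"))    # ['alice', 'bob']
```

This is why OrientDB's hash index is documented as faster for lookups but without range support: the speed comes precisely from discarding key order.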
Upgrade note: https://github.com/orientechnologies/orientdb/wiki/Upgrade
Download link: https://github.com/orientechnologies/orientdb/releases/download/1.5/orientdb-graphed-1.5.0.zip
Neo4j version 1.9.2 is now available.
- Optimize IO performance on Windows
- Improved procedure for setting up networking for HA clusters
- Some fixes to the REST API
Neo4j 1.9.2 is available immediately and is an easy upgrade from any other 1.9.x version.
You can download it from the neo4j.org web site.