Announcing the Official Node.js Connector for the Oracle Database by Oracle

Oracle has released a new Node.js driver for Oracle Database on GitHub!

This is exciting for the Node community. The interest in Node applications that connect to the widely available Oracle Database is being recognized and rewarded with a driver that has been designed from the ground up for performance, scalability and usability.

Releasing source code on GitHub is itself big news for the Oracle Database development community. It is part of the shift to making content highly available to Oracle users and lets developers be more efficient at adopting technology and creating new generations of applications.

StrongLoop is one player with feet in both the Node and Oracle communities. They have supported and given advice on the development of node-oracledb, even though it seemingly competes with their own driver. What node-oracledb actually does is let StrongLoop focus their efforts on making products like the LoopBack framework and the StrongLoop Arc GUI and platform even more successful. Maintenance of the database access layer can now transition to Oracle, where the expertise lies, freeing StrongLoop to add value above it.


Technology

Following in the footsteps of all good open source projects, the node-oracledb driver is currently at an arbitrarily chosen, low “0.2” release number. But it has already been called “very stable and fast”, and it already has a lot of features. It currently supports:

  • SQL and PL/SQL Execution
  • Binding using JavaScript objects or arrays
  • Query results as JavaScript objects or arrays
  • Conversion between JavaScript and Oracle types
  • Transaction Management
  • Connection Pooling
  • Statement Caching
  • Client Result Caching
  • End-to-end tracing
  • High Availability Features:
    • Fast Application Notification (FAN)
    • Runtime Load Balancing (RLB)
    • Transparent Application Failover (TAF)

Database features such as the native JSON data type support introduced in Oracle Database 12.1.0.2 can be used directly with node-oracledb.
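For illustration, here is a minimal sketch of querying such a column from Node; the j_purchaseorder table, its po_document column, and the hr credentials are placeholders, and the query assumes the column was created with an IS JSON check constraint:

    var oracledb = require('oracledb');

    oracledb.getConnection(
      { user: "hr", password: "welcome", connectString: "localhost/XE" },  // placeholder credentials
      function (err, connection) {
        if (err) { console.error(err.message); return; }
        // JSON_VALUE (new in 12.1.0.2) extracts a scalar from the JSON stored in the column
        connection.execute(
          "SELECT JSON_VALUE(po_document, '$.CustomerName') FROM j_purchaseorder",
          [],
          function (err, result) {
            if (err) { console.error(err.message); return; }
            console.log(result.rows);
          });
      });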

The node-oracledb driver itself uses Oracle’s C API for performance. This makes it a “thick client” driver requiring Oracle’s client libraries. These are free and easy to install, and they allow node-oracledb to take advantage of the significant feature set, engineering, and testing invested in those libraries, so scalable and highly available applications can be built.

Many of node-oracledb’s features are immediately available to an application. For example, statement caching is enabled by default. This feature helps reduce the unnecessary processing and network overhead of parsing a statement each time it is re-executed. By relying on the Oracle client library’s statement cache rather than exposing explicit parse and execute methods, the node-oracledb API stays simple.
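As a small illustration, the cache size can also be tuned through a driver property (stmtCacheSize, as described in the driver documentation; the value 40 below is arbitrary):

    var oracledb = require('oracledb');

    // Statement caching is on by default; this adjusts how many statements
    // each connection caches. The value 40 is arbitrary.
    oracledb.stmtCacheSize = 40;

    // Re-executing the same SQL text can then reuse the cached parsed statement
    // instead of incurring a fresh parse in the database.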

Currently node-oracledb has just three classes, each with a small number of methods.

  • OracleDb
  • Pool
  • Connection

You can create connections from the top-level OracleDb object, but it is recommended to create a pool first. This is a Node-side pool of connections. Having a pool of connections readily available is an advantage in itself, and there is a further advantage: the underlying implementation enables advanced Oracle high availability features, such as Fast Application Notification and Runtime Load Balancing, only for pooled connections.
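A minimal sketch of pooled connections, based on the createPool() and getConnection() calls in the driver documentation; the hr credentials and pool sizes are placeholders:

    var oracledb = require('oracledb');

    oracledb.createPool(
      {
        user          : "hr",            // placeholder credentials
        password      : "welcome",
        connectString : "localhost/XE",
        poolMin       : 1,               // arbitrary pool sizing
        poolMax       : 10,
        poolIncrement : 1
      },
      function (err, pool) {
        if (err) { console.error(err.message); return; }
        // Borrow a connection from the Node-side pool
        pool.getConnection(function (err, connection) {
          if (err) { console.error(err.message); return; }
          connection.execute("SELECT 1 FROM DUAL", [], function (err, result) {
            if (err) { console.error(err.message); return; }
            console.log(result.rows);
            // Return the connection to the pool when done
            connection.release(function (err) {
              if (err) { console.error(err.message); }
            });
          });
        });
      });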

Examples

A basic query example is simple:
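A minimal sketch of such a query, with placeholder hr credentials and an illustrative SELECT against the departments table:

    var oracledb = require('oracledb');

    oracledb.getConnection(
      {
        user          : "hr",            // placeholder credentials
        password      : "welcome",
        connectString : "localhost/XE"   // Easy Connect string for a local XE database
      },
      function (err, connection) {
        if (err) { console.error(err.message); return; }
        connection.execute(
          "SELECT department_id, department_name FROM departments WHERE department_id < 40",
          [],
          function (err, result) {
            if (err) { console.error(err.message); return; }
            console.log(result.rows);    // rows are nested arrays by default
          });
      });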

The output, with Oracle’s HR schema is:
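For the illustrative sketch above, with the stock HR sample data, that is roughly:

    [ [ 10, 'Administration' ],
      [ 20, 'Marketing' ],
      [ 30, 'Purchasing' ] ]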

Database connection strings use standard Oracle syntax.  The example shown connects to the XE database service on the local host. Connection identifiers from an Oracle network tnsnames.ora file may also be used.
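For example, swapping the Easy Connect string for a tnsnames.ora alias is just a change to the connectString property; the MYDB alias below is a placeholder:

    var oracledb = require('oracledb');

    oracledb.getConnection(
      // "MYDB" is a placeholder for a net service name defined in tnsnames.ora
      { user: "hr", password: "welcome", connectString: "MYDB" },
      function (err, connection) {
        if (err) { console.error(err.message); return; }
        console.log("Connected using the MYDB alias");
      });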

Query results can be fetched as objects by setting an output format property. This format can be a little less efficient for the driver to generate, so it is not the default:
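A sketch of the same illustrative query with the output format switched to objects; the outFormat property and the OBJECT/ARRAY constants are taken from the driver documentation of the time, so exact names may differ in other versions:

    var oracledb = require('oracledb');

    // Return each row as a JavaScript object keyed by column name
    // (the default is oracledb.ARRAY, i.e. nested arrays)
    oracledb.outFormat = oracledb.OBJECT;

    oracledb.getConnection(
      { user: "hr", password: "welcome", connectString: "localhost/XE" },  // placeholders
      function (err, connection) {
        if (err) { console.error(err.message); return; }
        connection.execute(
          "SELECT department_id, department_name FROM departments WHERE department_id < 40",
          [],
          function (err, result) {
            if (err) { console.error(err.message); return; }
            console.log(result.rows);    // now an array of objects
          });
      });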

The output would be:
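With the stock HR sample data, the object form looks roughly like:

    [ { DEPARTMENT_ID: 10, DEPARTMENT_NAME: 'Administration' },
      { DEPARTMENT_ID: 20, DEPARTMENT_NAME: 'Marketing' },
      { DEPARTMENT_ID: 30, DEPARTMENT_NAME: 'Purchasing' } ]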

The driver allows binding by name or position. The next example shows binding by position. The connection.execute() bind parameter is an array of values, each mapped to a bind variable in the SQL statement. Here there is just one value, 70, which is mapped to :id:
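A sketch consistent with that description, again using placeholder hr credentials (department 70 is Public Relations in the stock HR data):

    var oracledb = require('oracledb');

    oracledb.getConnection(
      { user: "hr", password: "welcome", connectString: "localhost/XE" },
      function (err, connection) {
        if (err) { console.error(err.message); return; }
        connection.execute(
          "SELECT first_name, last_name FROM employees WHERE department_id = :id",
          [70],    // bind by position: 70 is bound to :id
          function (err, result) {
            if (err) { console.error(err.message); return; }
            console.log(result.rows);
          });
      });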

GitHub has more examples.

Installation

Installation is currently via GitHub:

1. Install the small, free Oracle Instant Client libraries or have a local database such as the free Oracle XE Database release. On Linux you can simply install the Instant Client RPMs and run ldconfig to add the libraries to the run-time link path.

2. Clone the node-oracledb repository or download the ZIP file.

3. Run npm install

The repo’s INSTALL file has details and steps for several configurations.

Documentation

The API documentation can be found in the node-oracledb GitHub repository.

What’s next?

  • Oracle is actively working to bring node-oracledb to a 1.0 release soon. We are beginning to fill in basic feature support. Items on the “to do” list include LOB support, publishing on npmjs.com, and building on Windows.
  • Like every development team, we also have longer-term plans and dreams. However, these will depend on the adoption of the driver and the direction users want it to go in.
  • We had a lot of interesting development discussions about the intricacies of how, for example, connection pooling and date handling should work. We are actively soliciting feedback on node-oracledb so we can make it work for you. Raise issues on GitHub or post feedback and questions at our Oracle Technology Network forum.
  • Node-oracledb has an Apache 2.0 license. Contributions can be made by developers under the Oracle Contributor Agreement.
  • As we enhance node-oracledb you can follow updates at https://blogs.oracle.com/opal and https://jsao.io/

StrongLoop Node.js Weekly Review – Jan 26, 2015

Here’s this week’s recap of the Node.js-related content we posted in the last week or so, plus StrongLoop-related articles we came across on the web.

Blogs

Continuous Integration in the Cloud: Comparing Travis, Circle and Codeship

Continuous Integration (CI) is an essential part of any modern development process. Gone are the days of monolithic releases with massive changes; today it’s all about releasing fast and often. Most teams have come to rely on some sort of automated CI system. In this article we are going to talk about some of the benefits of CI and how it fits into small, medium, and large projects, followed by a quick overview of three different hosted CI services and how they apply to projects of various sizes.

Read more…

What’s New in io.js 1.0 Beta? – Streams3

Node Streams are a powerful way to build modules and applications that deal with large streams of data. The Streams API has gone through a few revisions and has steadily been improving. Streams 1 introduced push-streams to allow developers to consume data efficiently. Streams 2 added pull-streams in addition to push-streams to allow advanced use cases; however, the two styles could not be used together. Streams 3 solves this issue in an elegant manner and allows the same stream to be used in both push and pull mode. Streams 3 is available in Node v0.11/v0.12 and io.js. Read on to dig into the details.

Read more…

Creating REST APIs and Clients with LoopBack and AngularJS

by Valeri Karpov

Recently, I’ve been looking into StrongLoop’s LoopBack framework. LoopBack generates Express REST APIs by asking you a few simple questions at the command line. LoopBack lets you swap out different storage layers. For each model you define, you can choose to store it in MongoDB, Oracle, MySQL, or Microsoft SQL Server (or even in memory). Say you decide to store your users in MongoDB but your users’ gift cards in MySQL (for transactions). Even if you started writing your code with gift cards stored in MongoDB, LoopBack’s database abstraction layer makes switching a one-liner. Furthermore, LoopBack has SDKs for generating REST API clients in AngularJS, Android, and iOS. In short, LoopBack is a powerful tool for generating REST APIs that you can extend to scaffold client-side code.

Read more…

All LoopBack Node.js Sample Apps Now Conveniently Linked from a Single Repository

Over the last few months we’ve created quite a few sample apps to help you test out the various features of the LoopBack framework without having to code up an example from scratch. For instance, there are sample apps that let you test out connectivity to databases, plus quickly see features like model relations, application logic, and access control in action.

Read more…

Meet Jordan Kasper – StrongLoop Developer Evangelist

Now, this is a story all about how my life got flipped – turned upside down – and I’d like to take a minute, just sit right there, I’ll tell you how I became the Developer Evangelist of a place called StrongLoop. Well, that didn’t turn out quite as well as I’d hoped, but you get the gist. I’m the newest employee at StrongLoop and will be stepping into the Developer Evangelist role! I’ll elaborate on what I’ll be working on later, but first, I wanted to give you some background as you’ll be hearing from me a lot.

Read more…

Videos

Getting Things Done “The Node Way”

In this presentation, Fred Schott from Box covers how to do things “The Node Way”, including what this means for the community, coding, and async. For more information visit: thenodeway.io/

Watch…

Lessons Learned: Building Authentication Libraries for Node

In this talk, Randall Degges, Developer Evangelist at Stormpath, shares some of the best practices he learned while building Stormpath’s Express.js authentication libraries. Learn how to safely log users into web applications and secure REST APIs, the low-level details that make this possible, and which Node libraries you should be using (and where).

Watch…

Understanding V8 Garbage Collection, Memory Leaks and Profiling to Tune Node Apps

In this talk, Shubhra Kar – StrongLoop Director of Products and Node.js trainer/evangelist – will dive into three essential areas you should be looking at to identify performance tuning opportunities in your Node apps. First, we’ll look at how and why to perform CPU and heap profiling. Second, how to troubleshoot memory leaks and understand the difference between rapid and slow leaks. Finally, we’ll do a deep dive into how V8’s garbage collection works and the role it plays in optimizing Node apps.

Watch…

Blogs in Portuguese

Dica de Desempenho da Semana Node.js: Heap Profiling

Node gives us great power, and with great power comes great responsibility. That is especially true for the large, distributed applications that are known to benefit the most from Node. The ability to translate JavaScript into native machine code rather than interpreting it as bytecode, combined with asynchronous programming enabling non-blocking I/O, is at the core of what makes Node so fast and powerful.

Read more…

Dica de Desempenho da Semana Node.js: Gerenciamento da Coleta de Lixo

In our last weekly performance tip, we discussed in detail how the Node.js event loop acts as the orchestrator of requests, events, and callbacks. We also worked through a case of a blocked loop, which can wreak havoc on application performance. In this week’s post we will dive into the fundamentals of V8’s garbage collector (GC) and how it holds the “keys to the kingdom” of optimization in Node applications. We will also look at some tools for triaging GC and memory-management problems in V8.

Read more…

Blogs in Spanish

¿Qué hay de nuevo en io.js 1.0 Beta? – Streams 3

Node streams are a powerful way to build modules and applications that handle large streams of data. The Streams API has gone through several revisions and has been steadily improving. Streams 1 introduced push-streams to let developers consume data efficiently. Streams 2 added pull-streams alongside push-streams to enable more advanced use cases; however, the two styles could not be used together. Streams 3 solves this problem in an elegant way and allows the same stream to be used in both push and pull mode. Streams 3 is available in Node v0.11/v0.12 and io.js. Read on to dig into the details.

Read more…

What’s next?

  • Ready to develop APIs in Node.js and get them connected to your data? Check out the Node.js LoopBack framework. We’ve made it easy to get started either locally or on your favorite cloud, with a simple npm install.

 

Continuous Integration in the Cloud: Comparing Travis, Circle and Codeship

Continuous Integration (CI) is an essential part of any modern development process. Gone are the days of monolithic releases with massive changes; today it’s all about releasing fast and often. Most teams have come to rely on some sort of automated CI system. In this article we are going to talk about some of the benefits of CI and how it fits into small, medium, and large projects, followed by a quick overview of three different hosted CI services and how they apply to projects of various sizes.

Continuous Integration

Wikipedia defines Continuous Integration as

… the practice, in software engineering, of merging all developer working copies with a shared mainline several times a day.

Continuous Integration is very frequently accompanied by Continuous Delivery/Deployment (CD), and very often when people talk about CI they mean both.

CI relies on two main principles:

  1. Changes are merged to the main source branch as often as reasonably possible. Tasks are explicitly split up in such a way as to avoid creating gigantic change sets.
  2. Each change is fully tested. Automated testing is the cornerstone of CI. In a team environment, and even on a personal project, it’s nearly impossible to ensure that the latest changes don’t break existing code without tests. Every time a change set is merged to master, CI runs the entire suite to guarantee nothing was impacted negatively.

Open Source Projects

If you have an open source project, whether it’s a tiny NPM module or a large application, if it’s publicly hosted on GitHub or BitBucket and has automated tests, you can take advantage of CI right away. You will immediately see some benefits:

  1. Each time you push new changes, CI will pull your latest code, then build, configure, and run your tests in a clean environment. This means that if, for example, you forgot to commit a required library and the tests pass locally, they will fail on CI, letting you know about the omission.
  2. At a minimum you could show your users on the README page that you have a passing build. At best, if you have hooked up a code coverage tool, you could show how much of the code is exercised by the automated tests.

The configuration for CI in such projects generally boils down to installing dependencies and running the tests. Any self-respecting hosted CI provider can handle this type of project, and given that the majority of open source libraries are maintained by a sole developer, the builds for such projects don’t need to happen very often, nor are they very complicated.

In an effort to support and give back to open source, some CI providers offer free plans for open source projects. So if you have been working on one and don’t yet have CI hooked up, I recommend doing that right after finishing this article. You might have seen these “build passing” badges around GitHub. After setting up your project with a CI provider, you can add a badge to the README file via shields.io to proudly show off your passing build.

Read more

¿Qué hay de nuevo en io.js 1.0 Beta? – Streams 3

Originally authored by Krisha Raman, translated by Alejandro Oviedo

Node streams are a powerful way to build modules and applications that handle large streams of data. The Streams API has gone through several revisions and has been steadily improving. Streams 1 introduced push-streams to let developers consume data efficiently. Streams 2 added pull-streams alongside push-streams to enable more advanced use cases; however, the two styles could not be used together. Streams 3 solves this problem in an elegant way and allows the same stream to be used in both push and pull mode. Streams 3 is available in Node v0.11/v0.12 and io.js.

Read on to dig into the details.

Streams 1 (Push streams)

In the original streams implementation, a data event was emitted every time data became available on the stream.

Developers could use pause() and resume() to control the flow. Calling pause() would cause the implementation to stop emitting data events.
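As a rough sketch of that push style (the input.txt file name and the handleChunk consumer are placeholders):

    var fs = require('fs');

    // Streams 1 style: data is pushed to the consumer as it arrives
    var stream = fs.createReadStream('input.txt');   // placeholder file name

    stream.on('data', function (chunk) {
      stream.pause();                  // stop 'data' events while this chunk is processed
      handleChunk(chunk, function () {
        stream.resume();               // ask the stream to start pushing data again
      });
    });

    stream.on('end', function () {
      console.log('done');
    });

    // Placeholder asynchronous consumer of a chunk
    function handleChunk(chunk, done) {
      setImmediate(done);
    }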

Read more

Dica de Desempenho da Semana Node.js: Heap Profiling

Node gives us great power, and with great power comes great responsibility. That is especially true for the large, distributed applications that are known to benefit the most from Node. The ability to translate JavaScript into native machine code rather than interpreting it as bytecode, combined with asynchronous programming enabling non-blocking I/O, is at the core of what makes Node so fast and powerful.

Node runs your applications on Google’s V8 JavaScript engine, which uses a heap structure similar to the JVM and most other languages. And, as with most other languages, there are many common pitfalls that can lead to poor performance caused by memory leaks. So managing the heap is vital to maintaining great performance and efficiency.

Just throw more memory at the problem, right?

Some argue that simply restarting the application or using more RAM is all that is needed, and that memory leaks are not fatal in Node. However, as the leaks grow, V8 becomes increasingly aggressive with garbage collection. This shows up as more frequent and longer garbage-collection passes, slowing the application down more and more. So memory leaks do hurt performance in Node.

Leaks can often be masked killers. Leaky code can hold on to references to limited resources. You can run out of file descriptors, or suddenly be unable to open new database connections. It can then look as though the backend is failing the application, when it is really a container problem.

As this week’s highlight, we will cover StrongOps heap profiling. One of the many metrics StrongOps monitors is the size and usage of your Node applications’ heap over time. This lets you go deep into the V8 heap and helps you identify the cause of any memory leaks. What is StrongOps? It is a DevOps and performance-monitoring dashboard for Node applications. Here is a one-minute introductory video so you can learn more.

Read more