Today’s tip focuses on APIs, which just so happen to be the primary use case for Node.js! APIs are driving the new front end (mobile, tablets, wearables, web browsers, sensors, and more) by providing rich backend data access through a flexible interface. Increasingly, these APIs are built with API frameworks, which act as “super glue”: they tie together the endpoints of disparate enterprise systems and data sources, and then expose them to all clients through a uniform API.

API development usually starts on the backend with constructing a data access layer, modeling data objects, and defining relationship mappings. Today, we will focus on models and model-driven development of APIs in Node. The origins of modeling lie in a software design paradigm called “convention over configuration.”

Convention over Configuration


Convention over configuration (also known as coding by convention) seeks to decrease the number of decisions that developers need to make, gaining simplicity, but not necessarily losing flexibility. This means a developer only needs to specify unconventional aspects of the application. For example, if there’s a class Sale in the model, the corresponding table in the database is called sales by default. It is only if one deviates from this convention, such as calling the table sale, or product_sold that one needs to write code regarding these names.

When the convention implemented by the tool matches the desired behavior, it behaves as expected without having to write configuration files. Only when the desired behavior deviates from the implemented convention is explicit configuration required.
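To make this concrete, here is a toy sketch (plain JavaScript, not any particular framework’s API) of how a model layer might derive table names by convention and consult configuration only for overrides:

```javascript
// Toy illustration of convention over configuration (hypothetical helper,
// not a real framework API).
// Convention: the table for a model is the lower-cased, pluralized model name.
function conventionalTableName(modelName) {
  return modelName.toLowerCase() + 's';
}

// Configuration is only consulted when the convention must be overridden.
function tableNameFor(modelName, config = {}) {
  return config.tableName || conventionalTableName(modelName);
}

// "Sale" maps to "sales" with zero configuration...
console.log(tableNameFor('Sale'));                                // -> sales
// ...and only the unconventional case needs explicit configuration.
console.log(tableNameFor('Sale', { tableName: 'product_sold' })); // -> product_sold
```

The common case needs no code at all; only the deviation from convention is written down.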

ASP.NET MVC does this by default: when you create a controller named Home, the class is HomeController, and the views of the HomeController are found in /Views/Home in your project. Similarly, when dealing with IoC containers, conventions can take care of registering your types so they resolve correctly.

Many popular frameworks, including several Node-specific ones, use convention over configuration.

Newer modeling approaches

We have seen a couple of new modeling approaches pop up in the API development space, particularly for Node:

  1. Object-Relational Mapping (ORM)
  2. Object Datasource Mapping (ODM)

Object Relational Mapping

ORM was made popular in the J2EE world by engines like Hibernate, but it is now a widely adopted pattern across many frameworks and languages.


Relational databases are at the core of most enterprise applications. However, mapping a relational database to objects is a challenge. Object-relational mapping (ORM) is a programming technique that allows you to define a mapping between the application object model and the relational database.

Traditionally, an Object Model (OM) describes the object-oriented “blueprint” of your system: the classes you will create, the relationships between those classes, their methods, their properties, and so on. Typically, it is not aware of the database structure; objects have properties and references to other objects.

The Data Model (DM) deals with entities at the database level: how the classes in the OM will be stored, and in which tables. The DM therefore covers table schemas and the relationships between tables (primary keys, foreign keys), and so on.

Databases consist of tables with columns that may be related to other tables. ORM provides a bridge between the relational database and the object model. Object-relational mapping (ORM, O/RM, or O/R mapping) creates, in effect, a “virtual object database” that can be used from within the programming language and is database-vendor independent. Additionally, ORM can significantly reduce the lines of code you need to write by providing a higher level of abstraction.
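As a deliberately tiny illustration (plain JavaScript, not a real ORM such as Hibernate or Sequelize), the bridge between flat rows and rich objects can be sketched as a mapper:

```javascript
// Toy object-relational mapping sketch (illustrative only; real ORMs do far
// more: SQL generation, identity maps, caching, lazy loading).

// The object model: a class with behavior.
class Sale {
  constructor(id, amountCents) {
    this.id = id;
    this.amountCents = amountCents;
  }
  amountInDollars() {
    return this.amountCents / 100;
  }
}

// The mapper bridges the data model (flat rows in a "sales" table)
// and the object model (Sale instances).
const SaleMapper = {
  // row -> object ("hydration")
  fromRow(row) {
    return new Sale(row.id, row.amount_cents);
  },
  // object -> row, ready to be written back to the database
  toRow(sale) {
    return { id: sale.id, amount_cents: sale.amountCents };
  },
};

const row = { id: 1, amount_cents: 2500 }; // what the database driver returns
const sale = SaleMapper.fromRow(row);
console.log(sale.amountInDollars());       // -> 25
```

Application code works with Sale objects and their behavior, never with raw rows, which is exactly the abstraction an ORM generalizes.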

Object Datasource Mapping (ODM)

 


As noted above, data models traditionally deal with entities at the persistence level: how the classes in the object model get stored in the database, in which tables, the table schemas, and the relationships between tables (primary and foreign keys).

However, NoSQL and unstructured databases like MongoDB enabled the creation of Node packages like Mongoose that provide a straightforward, schema-based solution for modeling your application data, including built-in type casting, validation, query building, business-logic hooks, and more. We call this ODM (Object Datasource Mapping). Interestingly, the MongoDB model tooling is also called “Object Document Mapping,” because MongoDB stores data in JSON document format.
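The kind of schema-based casting and validation an ODM provides can be sketched in a few lines of plain JavaScript (the schema format here is invented for illustration; see Mongoose for the real API):

```javascript
// Toy sketch of ODM-style type casting and validation (invented schema
// format, not Mongoose's actual API).
const locationSchema = {
  name:   { type: String, required: true },
  floors: { type: Number, required: false },
};

function castAndValidate(schema, doc) {
  const result = {};
  const errors = [];
  for (const [field, rule] of Object.entries(schema)) {
    const value = doc[field];
    if (value === undefined) {
      if (rule.required) errors.push(field + ' is required');
      continue;
    }
    result[field] = rule.type(value); // built-in type casting, e.g. "3" -> 3
  }
  return { result, errors };
}

const ok = castAndValidate(locationSchema, { name: 'HQ', floors: '3' });
console.log(ok.result.floors); // -> 3 (cast from the string "3")
const bad = castAndValidate(locationSchema, {});
console.log(bad.errors);       // -> [ 'name is required' ]
```

Because the database itself enforces no schema, all of this checking lives in the application layer, which is precisely the role the ODM fills.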

Because there are no SQL queries or query planners involved, NoSQL databases can perform better with an ODM. Additionally, there is no SQL to write, which reduces the possibility of SQL injection.

Modeling gurus warn of pitfalls

 


ORM can have a few rough edges in “logic-intensive systems,” i.e. systems that perform complex calculations, optimizations, determinations, and decisions. ORM makes persistence easier by locking the object model to the database model, but that coupling can make the business logic harder to write and consume. For example, in a system whose business objects are essentially code-generated one-to-one from a legacy database with 400+ tables, the temptation and driver for code-generating the business objects is obvious (400+ tables).

The problem with consuming these business objects in the service layer is that they do not really reflect the behavior of the business logic. The lack of encapsulation of the raw database structure from the service layer makes it even worse.

Just some examples:

  • Big tables don’t map to a single object. A class with 100 different properties cannot possibly be cohesive. We’d have better business logic if that 100-column table were modeled in the middle tier by half a dozen classes, each with a cohesive responsibility. Using only one table for the entire object hierarchy may look fine, but big classes are almost always a bad thing.
  • Data Clump and Primitive Obsession code smells. A database row is naturally flat, but think about tables with lots of something_currency/something_amount combinations. There’s a separate Money object wanting to come out. If you make your business objects pure representations of the database, you can end up with lots of duplicated logic for currency and quantity conversions.
  • Natural cases for polymorphism in your object model. The roughest part of O/R mapping is handling polymorphism inside the database. Check out Fowler’s patterns on database mappings for inheritance.

ODM, on the other hand, is proprietary to each type of NoSQL backend (such as MongoDB) and cannot deal with RDBMS-style table structures. With no schema enforcement at the database level and objects stored as flexible JSON documents, model validation needs to be handled at the application level. ODM also lacks complex OO features such as polymorphism, inheritance, and overloading, which are usually captured in an object model, making it impractical for many enterprise systems, in addition to the issues below:

  • Automatic persistence of updated documents. If you change an instance of an application JSON document, it’s up to you to make sure that change gets persisted back to MongoDB. This leads to more verbose code and generally more errors, as it’s easy to forget to .save() a document when you’re done.
  • An Identity Map. If you’re working with several documents at once, you can get into a situation where you have two documents in memory that both represent the same document in MongoDB. This can cause consistency problems, particularly if the documents are both modified, but in different ways.
  • A Unit of Work. When you’re doing several updates, it’s nice to be able to “batch up” the updates and flush them to MongoDB all at once, particularly in a NoSQL database like MongoDB, which has no multi-document transactions. You don’t get true transactional behavior with a unit of work, but you can at least skip the flush step so the database is left unchanged.
  • Support for relationships between documents. I like to be able to construct object graphs in RAM that aren’t necessarily represented by embedded documents in MongoDB.

Introducing LoopBack


 

StrongLoop created LoopBack, a fully open-source Node API framework built on top of Express that assimilates the best practices of both types of modeling. Because it extends Express, it follows convention over configuration.

LoopBack simplifies and speeds up REST API development. It consists of:

  • A library of Node.js modules for connecting web and mobile apps to data sources such as databases and REST APIs.
  • A command-line tool (slc), for creating and working with LoopBack applications.
  • Client SDKs for native and web-based mobile clients.

A LoopBack application has three components:


  • Models that represent business data and behavior.
  • Data sources and connectors.
    • Data sources are databases or other backend services such as REST APIs, SOAP web services, or storage services.
    • Connectors provide apps access to enterprise data sources such as Oracle, MySQL, and MongoDB.
  • Mobile clients using the LoopBack client SDKs.

An application interacts with data sources through the LoopBack model API, available locally within Node, remotely over REST, and via native client APIs for iOS, Android, and HTML5. Using the API, apps can query databases, store data, upload files, send emails, create push notifications, register users, and perform other actions provided by data sources.

Mobile clients can call LoopBack server APIs directly using Strong Remoting, a pluggable transport layer that enables you to provide backend APIs over REST, WebSockets, and other transports.

LoopBack supports both “dynamic,” schema-less models and “static,” schema-driven models. A LoopBack model represents data in backend systems such as databases and REST APIs, and provides APIs for working with it. If your app connects to a relational database, a model can correspond to a table. Additionally, a model can be extended with validation rules and business logic.
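For instance, a static model can be declared in JSON. The sketch below shows a plausible definition for a Location model (the property names are illustrative, and the exact file layout varies by LoopBack version):

```json
{
  "name": "Location",
  "plural": "locations",
  "properties": {
    "name": { "type": "string", "required": true },
    "city": { "type": "string" }
  }
}
```

By convention, LoopBack derives the REST mount point from the plural name, so this model would be served at /locations.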

Creating Models


 

You can create LoopBack models in various ways, depending on what kind of data source the model is based on.

The REST API enables HTTP clients, such as web browsers, JavaScript programs, mobile SDKs, curl scripts, or any other compatible client, to interact with LoopBack models. LoopBack automatically binds a model to a list of HTTP endpoints that provide REST APIs for model instance data manipulation (CRUD) and other remote operations.

By default, the REST APIs are mounted at /<Model.settings.plural | pluralized(Model.modelName)> (for example, /locations) relative to the base URL, such as http://localhost:3000/. LoopBack provides a number of built-in models that have REST APIs. See the REST API docs for more information.

Predefined remote methods

By default, for a model backed by a datasource, LoopBack exposes a REST API that provides all the standard create, read, update, and delete (CRUD) operations. For example, for a model called Location (that provides business locations), LoopBack automatically creates the following REST API endpoints:

Model API                       HTTP Method   Example Path
create()                        POST          /locations
upsert()                        PUT           /locations
exists()                        GET           /locations/:id/exists
findById()                      GET           /locations/:id
find()                          GET           /locations
findOne()                       GET           /locations/findOne
deleteById()                    DELETE        /locations/:id
count()                         GET           /locations/count
prototype.updateAttributes()    PUT           /locations/:id

Relationship Mapping

Individual models are easy to understand and work with. But in reality, models are often connected or related.  When you build a real-world application with multiple models, you’ll typically need to define relations between models. For example:

  • A customer has many orders and each order is owned by a customer.
  • A user can be assigned to one or more roles and a role can have zero or more users.
  • A physician takes care of many patients through appointments. A patient can see many physicians too.

With connected models, LoopBack exposes a set of APIs to interact with each of the model instances and to query and filter the information based on the client’s needs. You can define the following relations between models:

  • belongsTo
  • hasMany
  • hasManyThrough
  • hasAndBelongsToMany

You can define model relations in JSON or in JavaScript code. When you define a relation for a model, LoopBack adds a set of methods to the model.

belongsTo

A belongsTo relation sets up a one-to-one connection with another model, such that each instance of the declaring model “belongs to” one instance of the other model. For example, if your app includes customers and orders, each order can be placed by exactly one customer.


The declaring model (Order) has a foreign key property that references the primary key property of the target model (Customer). If a primary key is not present, LoopBack automatically adds one.
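In JSON, the Order side of this relation could be declared roughly as follows (a sketch; the surrounding model definition fields are illustrative):

```json
{
  "name": "Order",
  "relations": {
    "customer": {
      "type": "belongsTo",
      "model": "Customer",
      "foreignKey": "customerId"
    }
  }
}
```

With this in place, LoopBack adds relation methods such as order.customer() for navigating from an order to its customer.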

hasMany

A hasMany relation builds a one-to-many connection with another model. You’ll often find this relation on the “other side” of a belongsTo relation. Here each instance of the model has zero or more instances of another model.  For example, in an app with customers and orders, a customer can have many orders.


The target model, Order, has a property, customerId, as the foreign key to reference the declaring model (Customer) primary key id.
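The Customer side could be declared like this (again a sketch of the relevant fields):

```json
{
  "name": "Customer",
  "relations": {
    "orders": {
      "type": "hasMany",
      "model": "Order",
      "foreignKey": "customerId"
    }
  }
}
```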

hasManyThrough

A hasMany through relation is often used to set up a many-to-many connection with another model. Here the declaring model can be matched with zero or more instances of another model by proceeding through a third model. For example, in an app for a medical practice where patients make appointments to see physicians, the relevant relation declarations might be:


The “through” model, Appointment, has two foreign key properties, physicianId and patientId, that reference the primary keys in the declaring model, Physician, and the target model, Patient.
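In LoopBack, a hasMany-through relation is declared as a hasMany with a through model. A sketch of the Physician side:

```json
{
  "name": "Physician",
  "relations": {
    "patients": {
      "type": "hasMany",
      "model": "Patient",
      "through": "Appointment"
    }
  }
}
```

The Patient model would carry the mirror-image declaration, pointing at Physician through the same Appointment model.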

hasAndBelongsToMany

A hasAndBelongsToMany relation creates a direct many-to-many connection with another model, with no intervening model. For example, in an app with assemblies and parts, where each assembly has many parts and each part appears in many assemblies, you could declare the models as:

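A sketch of the Assembly side of such a declaration (no intervening model is named, so the join data is managed for you):

```json
{
  "name": "Assembly",
  "relations": {
    "parts": {
      "type": "hasAndBelongsToMany",
      "model": "Part"
    }
  }
}
```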

You can define a model declaratively in models.json and you can programmatically define and extend models using the Model class.

Data sources and connectors


 

LoopBack is centered around models that represent data and behaviors. Data sources enable exchange of data between models and backend systems such as databases. Data sources typically provide create, retrieve, update, and delete (CRUD) functions. LoopBack also generalizes other backend services, such as REST APIs, SOAP web services, and storage services, as data sources.

Data sources are backed by connectors, which implement the data exchange logic using database drivers or other client APIs. Connectors are not used directly by application code; instead, the DataSource class provides APIs to configure the underlying connector and exposes their functions via the DataSource or model classes.
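Data sources are typically configured declaratively. The sketch below shows what such a configuration might look like, with one in-memory data source and one backed by the MySQL connector (host, credentials, and data source names are illustrative):

```json
{
  "db": {
    "connector": "memory"
  },
  "mysqlDs": {
    "connector": "mysql",
    "host": "localhost",
    "port": 3306,
    "database": "demo",
    "username": "demo",
    "password": "secret"
  }
}
```

Swapping the backing store then becomes a configuration change rather than a code change, since models only talk to the DataSource abstraction.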

The diagram illustrates the relationship between LoopBack Model, DataSource, and Connector.

  • Define the model using LoopBack Definition Language (LDL). This provides a model definition in JSON or as a JavaScript object.

  • Create an instance of ModelBuilder or DataSource, which extends ModelBuilder. ModelBuilder compiles model definitions into JavaScript constructors for model classes; DataSource inherits that function from ModelBuilder. In addition, DataSource adds behavior to model classes by mixing in methods from the DataAccessObject (DAO).

  • Use ModelBuilder or DataSource to build a JavaScript constructor (i.e., the model class) from the model definition. Model classes built with ModelBuilder can later be attached to a DataSource to receive the mixin of data access functions.

  • As part of step 2, DataSource initializes the underlying Connector and provides configuration to the connector instance. The Connector works with DataSource to define the functions, as a DataAccessObject, to be mixed into the model class. The DataAccessObject consists of a list of static and prototype methods.

LoopBack DataSource object

The DataSource object is the unified interface for LoopBack applications to integrate with backend systems. It is a factory for data access logic around model classes. With the ability to plug in various connectors, DataSource provides the abstraction needed to interact with databases or services and to decouple the business logic from the plumbing technology.

Depicted below is the end-to-end LoopBack API framework architecture for model-driven development.


To check out more details, please visit loopback.io or the StrongLoop website. In our next weekly blog, we will deep-dive into the API services architecture of LoopBack.

What’s next?

  • Read about eight exciting upcoming Node v0.12 features, and how to make the most of them, from the authors themselves.
  • Need training and certification for Node? Learn more about both the private and open options StrongLoop offers.