Evernote Tech Blog

The Care and Feeding of Elephants

Streamlining The Evernote Cloud SDK for iOS


In this post we provide an overview of the rebuilt and streamlined Evernote Cloud SDK for iOS. You can try it out for yourself at: github.com/evernote/evernote-cloud-sdk-ios

We’ve always been proud to offer the full sophistication of the Evernote platform to all developers to build on. You have access to the same complete API that our own client apps are built on, so you can do anything we can. What that has too often meant, though, is that you’ve had to understand a lot of that sophistication before accomplishing basic tasks. The more complex technology — for example, shared notebooks and Evernote Business notebooks — has been challenging to support without expertise and a fair bit of code.

We’re starting to change that. The developer team here has been working hard for the last few months rethinking our client SDK experience, simplifying and streamlining common workflows. We’re debuting this work with an all-new iOS SDK, which is now live as a new beta version here: github.com/evernote. Want to just turn some text or an image into a note? Maybe just offer a one-stop Save To Evernote button? We’ve made these things really easy. We’ve also added simple support for searching for and downloading notes and thumbnails. We expect these features — plus some other goodies outlined below — will cover the vast majority of the ways apps want to integrate with Evernote. For anything not covered, the full API is always available.

Here are some of the notable improvements you’ll find in the new Evernote Cloud SDK for iOS:

  • Professionally designed “share sheet” UI for “Save To Evernote”. This is the simplest integration and requires almost no work on your part. All source is included so you can modify it or learn from it.
  • Easy-to-use functions for creation, updating, sharing, search, and download of notes.
  • One-liner attachment of images or other resources to notes.
  • Ability to capture basic HTML content from a web view and generate a note from it.
  • Automatic support for shared and business notebooks. This is completely transparent to the developer.
  • Automatic support for the upcoming “App Notebook” style of authorization.
  • Simple conversion of downloaded notes into data prepared for display in a UIWebView.
  • Significantly reduced code and binary SDK size.
  • An “advanced” set of headers that let you access the full traditional (EDAM) API directly if you need it for more complicated tasks.
  • A build script that will create a bundled framework that you can drop into your project.

We believe that Evernote and your app are better with each other. We also believe that you have many important things to spend your limited development time on. The new Evernote Cloud SDK for iOS is a step towards quicker, reliable integrations that result in better experiences for your users. Give the new SDK a spin, and let us know how it works for you!

About Ben

Ben Zotto is a product and engineering lead at Evernote. He created the Penultimate handwriting app for iPad, and is currently working on the redesigned Evernote SDK for iOS, among other projects.

Ben on Stack Overflow


[RSVP] Evernote Dev Party @ WWDC


The Evernote Dev team is hosting our 3rd annual iOS meetup on the first night of Apple’s Worldwide Developer Conference (WWDC) in San Francisco. We’ll be right across the street at the Thirsty Bear Pub!

Tickets are limited, so RSVP while they’re still available: Eventbrite

We look forward to connecting in person, answering questions about our API, the Evernote Platform Awards, and the Evernote App Center.

Monday, June 2nd, 2014
6pm – 9pm
Thirsty Bear Brewing Co.
661 Howard St, San Francisco
California 94105

Next week we’re revealing a brand new Evernote Cloud SDK for iOS, with new tools for quickly and easily passing content into Evernote. You can try out the new SDK here: github.com/evernote/evernote-cloud-sdk-ios


Feedback Needed for New API Feature: App Notebooks


We’re pleased to announce the beta release of our most-requested API feature: Single Notebook Authorization, which we’re calling “App Notebooks”. Beginning this summer, third-party developers can connect their application exclusively to a single notebook within a given user’s Evernote account.

Read the new documentation here: App Notebooks API docs

This benefits both our users and developers:

  • Users can choose which notebook an app can access
  • Developers choose whether their application fits in this model
  • User data is kept secure for business and personal notebooks
  • Server-side optimizations allow for faster API access

How do App Notebooks work?

When a user is prompted to authorize an application to access their Evernote account, they will have a few options:

  • Create a new notebook for the application to use; this is the default behavior; the notebook will have the same name as the application
  • Create a new notebook with a custom name
  • Select an existing notebook from their account to be used by the application

The good news is that App Notebooks will require very little in terms of code modifications — most of the magic happens on the server during the OAuth process.

We need your help!

This feature is about to enter private beta testing and we’re looking for a small group of partners to help us work out the kinks. If you’re interested, please let us know by filling out this form. All partners chosen for the beta will receive a complimentary year of Evernote Premium.


“What if my app needs access to a user’s entire Evernote account to function properly?”

During the API creation process, you’ll have the opportunity to request full account access for your API key.

“What about existing API keys?”

All existing API keys will retain their current permission level.

If you have any questions, please post them in the comments below or in our developer forum and we’ll answer them as quickly as possible. Also, documentation for this feature will be provided to testers during the beta period and published to the Evernote Developer site when App Notebooks go live this summer.

Example Flow






Automated builds at Evernote

With more than 100 developers spread out among several countries, a robust continuous integration and build automation practice is crucial to Evernote’s development process. Through automated builds and tests, we can head off problems as soon as they are introduced into our code, resulting in a more efficient process and a reduced chance of impacting our users. Our overall strategy is based on the underlying principles from Martin Fowler’s 2006 article on Continuous Integration.

The Engineering Services team at Evernote operates a Jenkins master server and about 30 slave machines for building and testing. The slaves run a variety of operating systems (Windows, OS X, and Linux), SDKs, build tools (Xcode, Android SDK, Visual Studio, Maven, etc.), and test suites. Standard practice is for each build job to run on at least two slaves, so that if one is offline for any reason the other machine will seamlessly handle the build.


Our typical build-test-release workflow includes a verification build every time code is committed, a daily build for internal testers, and public builds (both beta and final release) that, in the case of our client applications, trigger an automatic deployment from within the client app.

In our Jenkins environment we build not only our Java-based web service, but also various applications for iOS, Android, Windows, OS X, Windows Phone, Pebble and BlackBerry. Every team is a little different, but the Evernote for Android client app provides a useful example of how we leverage automation.

  1. Every time an Android developer commits code to the main branch, a build is triggered. In the case of the Android client app this automated build is an internal test version that’s automatically deployed, so it’s important to make sure these builds pass basic usability testing.
  2. A successful build job triggers a build verification test (BVT), which consists of automated functional tests running on Android devices.
  3. A successful BVT triggers a deployment of the client app by pushing out an XML file to a location that is monitored by the clients.
  4. When the clients detect the new file, they immediately download and install the new version.
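The check the clients perform against that deployment file can be sketched as follows. This is a hypothetical illustration in JavaScript (the real clients are native apps, and the `latestVersion` manifest field is an assumption, not Evernote's actual file format):

```javascript
// Hypothetical sketch of the update check a client performs after fetching
// the deployment manifest. The field name `latestVersion` is an assumption.
function isUpdateAvailable(installedVersion, manifest) {
  var a = installedVersion.split('.').map(Number);
  var b = manifest.latestVersion.split('.').map(Number);
  // Compare dotted version strings segment by segment,
  // treating missing segments as zero.
  for (var i = 0; i < Math.max(a.length, b.length); i++) {
    var mine = a[i] || 0;
    var theirs = b[i] || 0;
    if (mine !== theirs) return theirs > mine;
  }
  return false; // identical versions: nothing to install
}
```

A numeric, segment-wise comparison avoids the classic pitfall of comparing version strings lexicographically (where "5.10" would sort before "5.9").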

Our QA team receives emails when release candidate versions are ready for testing and sign-off, and these executables are readily available for download via Jenkins. The Android team publishes public beta and release versions through the Google Play store. All released versions of our apps are built and signed in our Jenkins environment.

Build System

For the Java-based Evernote web service, the workflow is relatively similar to the above. Developers commit code to the web service component projects, and those commits trigger verification builds that incorporate a series of tests. Successful builds trigger an automatic deployment to our staging environment, where QA engineers perform tests against the new code. We do a public release of our web service almost every Wednesday, at which time the build that’s been approved by QA is deployed by our operations team.

For our Maven-based jobs, we manage dependencies across projects using Artifactory, an open source tool that tightly controls how and where build artifacts are used in our environment.

Jenkins offers about 900 open-source plugins, of which we use nearly 75. Some of the most important plugins for our environment include:

  • Git, Maven, Gerrit Trigger
  • Email Extension
  • S3 Publisher
  • Parameterized Trigger
  • Token Macro
  • Jira

A follow-up blog post will cover some interesting parts of our web service production deployment process.


The 2014 Evernote Platform Awards


Every year we highlight the community of startups, developers, and designers building amazing products with the Evernote API. It’s time to kick off the 2014 Evernote Platform Awards and we want you to take part in the season!

What is it?
Our annual awards celebrating the best apps that connect to Evernote.

Who is this for?

  • Users – Nominate your favorite apps that connect to Evernote
  • Developers – Build great apps that utilize the Evernote API
  • Existing Apps – Add “Save to Evernote” to your project for consideration

What do we look for?

  • Design: Is the application polished, visually appealing, and easy to use?
  • Development: Does the application incorporate the Evernote API?
  • Utility: Is the application compelling and/or indispensable?
  • Originality: Is the application unique and/or innovative among the competition?

Key Dates:

  • May 6th - Nominations open for the Evernote Platform Awards
  • July 31st - Nominations close; Evernote judges select final nominees
  • August 4th - Top finalists are announced and our community votes for their favorites
  • September 2nd - Evernote Platform Award winners announced

What are the Award categories?
Popular use cases for Evernote:

  • News
  • Productivity
  • Food
  • Education
  • Business
  • Lifestyle

Acknowledging the top teams and their efforts:

  • Best Design
  • Best Multi-Platform App
  • Best New Startup
  • Best Yinxiang Biji App
  • Best Integration of 2014

How do you get involved?

For users: Explore the Evernote App Center, our collection of great third-party Evernote apps, and nominate your favorite apps using the banner on each product page.


For developers: Nominate your app for the Evernote Platform Awards by going to the platform awards site and filling out the form.



We look forward to sharing great apps with our community of users!



Dashboarding with Open Source Tools

As a member of the Analytics team at Evernote, my working life revolves entirely around data. Evernote saves and maintains a vast and growing amount of data on the usage of our service. One of the biggest challenges that we face is presenting that data to our colleagues and key decision-makers in a comprehensible, useful manner. The Business Analytics team at Evernote has been using Jaspersoft technology to present this data in a multitude of formats, but all of them entirely static rather than interactive. We’ve recently been using Tableau for our variable, dynamic dashboarding needs, as this software allows the user to switch the measures, dimensions, and time frame to suit their individual purposes. But what if we could create a dynamic, interactive dashboard using only free, open source software? When Evernote decided to launch Market, Evernote’s curated e-commerce site for physical goods, we decided to give it a try.

We started by narrowing down exactly what we would be collecting and what would be most effective for the Market team to know. We ended up with a series of measures – bookings, units, orders, cart visitors, cart conversion, and average order value – and a series of dimensions – request country, shipping country, client used to place order, item and item group – that the user can select in order to change what they view. To add an additional element of complexity, we also have multiple ‘stores’ that a customer can access, depending on their geographic location. The ‘store’ is the localized webpage. When we first launched this project, there were only three – United States, Canada, and Japan – and the Market team requested to be able to view the same data either globally or filtered by store.

First Iteration

Though intrigued by the D3 (d3js.org) visualization library, our first iteration, and my first foray into web development, was a pretty basic design. Rather than immediately launch into a complicated, JavaScript-heavy project, we opted for a simpler venture using an iframe and some basic CSS. The iframe points to a URL associated with the variables chosen from a drop-down menu, which displays a static Jaspersoft report. When a different variable is selected, the iframe URL updates by substituting the new variable for the previous one.
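That substitution can be sketched in a few lines. The report naming scheme and element IDs below are hypothetical examples, not the dashboard's actual URLs:

```javascript
// Build the URL of a pre-generated Jaspersoft report for the selected
// variables. The naming scheme here is a hypothetical example.
function buildReportUrl(base, measure, dimension, store) {
  return base + '/report_' + measure + '_by_' + dimension + '_' + store + '.html';
}

// Re-point the iframe whenever the drop-down changes (guarded so the helper
// above stays usable outside a browser). Element IDs are hypothetical.
if (typeof document !== 'undefined') {
  var frame = document.getElementById('dashboard-frame');
  var select = document.getElementById('measure-select');
  select.addEventListener('change', function () {
    // 'store' / 'US' stand in for the other drop-downs' current values
    frame.src = buildReportUrl('/reports', select.value, 'store', 'US');
  });
}
```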


Figure A: A screenshot of the original dashboard using an iframe and static Jaspersoft reports.

As simple and effective as this method was, it still meant creating, scheduling, and maintaining as many reports as there were permutations of available measures, dimensions, and stores. By the time the initial phase of this venture was complete, we had added almost one hundred and fifty reports, and their processing time, to our regular daily schedule. Additionally, we were getting complaints that the graphics were too small to make out, and customizing them would have required extra effort. Once we got word from the Market team that they would be adding an additional dozen stores, we decided it was time to try D3 rather than attempt to support a further five hundred reports in our daily reporting schedule.

The Second Generation

Once we switched tactics to start on a dashboard using D3, the first thing we needed was a data source. I created a query that gave us each measure and dimension, with a row for each item of each order. Every day, Jaspersoft generates the CSV that the D3 dashboard uses. When the CSV is read into the page, we call a JavaScript library called Pivot.js that creates a pivot table by aggregating the appropriate measure in reference to the dimension selected. This creates a table that is considerably easier to read than a standard table with a separate row for each day and category. In this first version, once the CSV for the table was read and processed, the same CSV was read and processed again in order to create the graph. We used D3 library functions to read the fields and ‘nest’ the data so that it could be rendered as we see it shown in the table.
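The aggregation step itself is conceptually simple. Here is a stripped-down sketch of what the pivot produces from the raw per-item rows — plain JavaScript, not Pivot.js's actual API, and the field names are illustrative:

```javascript
// Sum one measure grouped by one dimension — the core of what the pivot
// table computes from the raw per-item CSV rows.
function pivotSum(rows, dimension, measure) {
  var totals = {};
  rows.forEach(function (row) {
    var key = row[dimension];
    totals[key] = (totals[key] || 0) + Number(row[measure]);
  });
  return totals;
}
```

For example, summing `units` by `store` collapses one row per order item into one row per store, which is what makes the pivoted table so much easier to scan than the raw data.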

Figure B: A screenshot from the dashboard showing the pivot table, and the graph that renders the table output. Data has been changed to protect confidentiality.

The table defaults to the previous seven days, but scrolling arrows allow the user to select a specific seven-day period. There are also buttons for the date parts Day, Month, and Year, which the user can select in order to see the table’s data aggregated by that date part. The graph continues to display the data by day, but expands to show all days rather than a seven-day window. When a date part is selected, the script compares the part selected with the parsed order date in order to aggregate the measure over the selected time frame. For example, when Year is selected, Pivot uses jQuery to parse the year out of the order date field, and renders the table by aggregating the measure over the year rather than the date. The scroll buttons work by passing a variable to reprocess_display, the function that creates the table. This variable, called offset, tells the function how many days forward or backward to shift the first date displayed in the table. For example, when the double arrow back is pressed, the table is reprocessed to display the seven days prior to the default display; click it again, and it shows data for two weeks prior.
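The offset arithmetic can be illustrated with a small helper. This is a hypothetical sketch, not the actual reprocess_display code:

```javascript
// Given the first date of the default seven-day window and an offset in
// days (negative to scroll back in time), return the window's new start.
function windowStart(defaultStart, offsetDays) {
  var d = new Date(defaultStart.getTime()); // copy; don't mutate the input
  d.setDate(d.getDate() + offsetDays);      // Date handles month rollover
  return d;
}
```

Pressing the double back arrow once corresponds to `offsetDays = -7`; pressing it again accumulates to `-14`, and so on.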

While this worked well enough, the page took about sixteen seconds to load, and pulling in the same large data file twice was completely redundant. We decided to make a few changes to optimize the user experience.

Continued Enhancements

There were several improvements we knew we would have to implement to make this dashboard a pivotal tool. For one thing, it was difficult to interpret graphs for dimensions with more than four or five categories, as every category was rendered in the graph as its own line. Another common request was the ability to export the data displayed in the table. We also needed to cut the loading time about in half.

The first way that we combated the elongated loading time was to use the output of the pivot table to render the graph. Finding the correct API call took some experimenting, but using pivot.data().all gave us a group of arrays of the original data. Once this data went through the same D3 library nesting process previously used with the CSV directly, the output was the same and the process was much faster. Since there is now only one CSV to read and process, we shaved several seconds off the loading time. We also minified and gzipped all our JavaScript and CSS resources, and created an .htaccess file that set far-future expiration dates for all resources. This certainly made some improvements; however, the main problem was still the size of the data being imported, and that was only going to increase with time. We decided to create two separate data sources, one with only the last eight weeks and another with the full history. The default setting imports the smaller CSV, but the user can select “All Time” under the settings tab in order to see the data in its entirety. The page now loads in three seconds, an improvement of greater than 80%.

Figure C: A screenshot of some of the enhancements now available using D3. Data has been changed to protect confidentiality.

Searching through online forums for an export-to-Excel function turns up dozens of options. After trying several with no luck, I decided on a slightly different tactic: rather than attempt to export the raw arrays, I would parse the table itself and create a CSV to be exported. This involved using jQuery to find table elements and table delimiters, parse them apart, and piece them back together as individual elements with comma separators. Once this matrix-like variable is built, it can be downloaded by pointing the browser to the appropriate href, created using ‘data:application/csv;’.
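The quoting rules are the fiddly part of that serialization. A sketch of the step from matrix to CSV and data URI (the jQuery scraping of the table into the matrix is elided, and the function names are illustrative):

```javascript
// Serialize a matrix of cell values into CSV text, quoting any cell that
// contains commas, quotes, or newlines (embedded quotes are doubled, per
// the usual CSV convention).
function matrixToCsv(matrix) {
  return matrix.map(function (row) {
    return row.map(function (cell) {
      var s = String(cell);
      return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
    }).join(',');
  }).join('\n');
}

// In the browser, hand the result to the user by pointing the page at a
// data URI (guarded so matrixToCsv stays testable outside a browser).
function exportCsv(matrix) {
  if (typeof window !== 'undefined') {
    window.location.href = 'data:application/csv;charset=utf-8,' +
                           encodeURIComponent(matrixToCsv(matrix));
  }
}
```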

The default table created with Pivot does not include column or row totals, which was a problem for the many dashboard users who wanted a basic update at a glance. Knowing that Pivot is based heavily on DataTables (DataTables.js), another JavaScript library, I located the “fnFooterCallback” option that would allow us to sum the elements in each column and place that aggregation in the footer row. At first, I got an error that the row didn’t exist, and realized that there was no footer row into which the function could insert the results. After using jQuery to create the footer row and adding some formatting to the output, the summing function worked beautifully.
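The summing itself can be factored into a small helper that a fnFooterCallback would call. The parameter names below mirror the index arrays DataTables' legacy callback receives; the column index is an example:

```javascript
// Sum one column over the rows currently displayed. aData is all row data,
// aiDisplay maps display positions to row indices, and [iStart, iEnd) is
// the visible slice — the shapes DataTables passes to fnFooterCallback.
function sumDisplayedColumn(aData, aiDisplay, iStart, iEnd, col) {
  var total = 0;
  for (var i = iStart; i < iEnd; i++) {
    // parseFloat tolerates formatted cells; non-numeric cells count as 0
    total += parseFloat(aData[aiDisplay[i]][col]) || 0;
  }
  return total;
}
```

Inside the callback, the result would then be written into the (jQuery-created) footer row's cell for that column.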

The last piece of the puzzle, for round two, was adding a highlight feature that linked the graph’s lines to the corresponding legend objects. By adding a data-key attribute set to the selected dimension, the line, legend square, and legend text were effectively linked. Using jQuery hover and click functions, combined with some CSS, we added a few features to make the graph easier to digest. When hovering over any element of the graph or legend, the corresponding elements also highlight, as seen in Figure C. When a legend square is clicked, that line alone appears in the graph, with the others temporarily invisible. Hovering over the other elements brings them back to the graph. In the next iteration, clicking on the legend square will not only highlight that single element, but will also rescale the graph in order to zoom in on that individual line.

Coming Attractions

In the next iteration of this dashboard, we hope to continue to add useful features and enable decision making surrounding Market. The first goal is to whittle down the amount of time it takes to load and process the page. By creating calculated fields using jQuery rather than including them as separate fields in the CSV, we can cut down on valuable loading and processing time and space. The next major feature will be the ability to filter the graph in addition to the table. This would require a closer link between the table and the graph. Since we’re already using the output from the pivot table to create the chart, either we can find the appropriate call to the pivot table, or we can use the table as an HTML element and render the graph off of that. This should not only filter the lines and items shown in the graph, but should also rescale the graph in order to zoom in on the targeted item. Currently, the drawGraph() function requires completely reloading the entire page, so this will also have to be reworked in order to accept new input more flexibly. With this experience under our belts, we can confidently predict that we will use D3 in the future for additional projects.

 About Maggie


Maggie Soderholm is a Data Analyst on the Analytics team at Evernote.  She works to discover insights into user behavior and conversion, in addition to her work with D3 visualizations.




Wearables Update: Evernote for Pebble Now with Cyrillic and Image Support (Really!)


The following is a behind-the-scenes walkthrough of our latest updates to Evernote for Pebble by our lead wearables engineer, Damian Mehers. For our announcement of Evernote for Pebble, click here.

Since the initial release of the Evernote app for the Pebble we’ve made several improvements.  I’m going to briefly talk about what’s changed, and then dive into technical details about how I implemented two of the more challenging changes: supporting custom fonts and generating bitmaps from JavaScript.

First we will review the straightforward improvements:

Preserving line-breaks in notes

If you used the initial release of Evernote for Pebble, you may have noticed that some of your notes looked a little garbled.  This was because the app was not preserving line-breaks in your notes:


With the latest release, we’ve addressed this and it should be much better now:


Note font size

Soon after we launched Evernote for Pebble, we received helpful feedback. Thankfully, some people have much better eyesight than I do, so we added a setting for those keen-eyed people who can read smaller fonts:


Reminder order

Some people use Evernote reminders to “pin” their most important notes to the top of a notebook, and some people use them as actual reminders.

For the “pinned important notes” use case, it makes sense to list the reminders in the order in which they are pinned on the desktop. For the “Notes as actual reminders” use case it makes more sense to list them in the order that each reminder is set to go off.

With the latest release, you choose the order, just like you can on Evernote for Mac.  The default is to order them based on the order in which they are pinned, but if you use reminders as actual reminders, you can see them by reminder date instead:


Custom fonts & Cyrillic Support

The Pebble doesn’t support fonts with accented characters, which is why the initial release of Evernote didn’t support them either. You’ll have noticed that when you receive a text message with an emoji in it, you just get little squares in the notification on your Pebble. The same thing happens for notes with Cyrillic characters. Look what happens to the note’s title below:



Soon after the initial release we received a deluge of requests to support PebbleBits, an unofficial way to make the Pebble support additional fonts. Essentially, you generate, download, and install custom firmware which includes the additional fonts.

To add support for this in the Evernote JavaScript companion app that runs on the phone, so that it can send the correct strings over to the Evernote app running on the Pebble, I needed to convert the UTF-16 strings containing note text to the equivalent UTF-8 bytes to send to the Pebble.


This is the JavaScript code which pushes the UTF-8 bytes into a byte array buffer from a UTF-16 String:

// From http://stackoverflow.com/a/18729536/3390
function addUTF8StringToBuffer(buffer, text) {
  var utf8 = unescape(encodeURIComponent(text));
  for (var i = 0; i < utf8.length; i++) {
    buffer.push(utf8.charCodeAt(i));
  }
  buffer.push(0); // NUL-terminate for the C code on the Pebble
}


You’ll see that I am NUL terminating the string for consumption by the C code on the Pebble.

Once you’ve generated and installed custom firmware with the appropriate character set(s) you’ll see the note titles and content displayed properly:


Note bitmaps – Images on the Pebble!

Previously, when you viewed a note in Evernote for Pebble, the app looked to see if there was any text in the note content. If there was, it displayed the text; otherwise it displayed this message:


Wouldn’t it be cool, I thought, if we could display an image on the Pebble if the note contains an image?  That way, when I’m shopping for yogurt, and I’m faced with an overwhelming array of dairy produce, I could glance at my watch to remind myself of what it was I was supposed to buy, assuming I’d previously captured a photo of the yogurt in Evernote on my phone:


There were a couple of challenges to overcome.  The Pebble has a black-and-white display, and it requires its bitmaps to be in a specific format with a specific header.  On the other hand you can store all kinds of images in Evernote, including PNGs, JPGs, BMPs etc.  And they can have all kinds of color depths.

Using the Evernote Thumbnail API

Fortunately there is an Evernote Thumbnail API, which as a developer you can use to request a thumbnail for a specific note or resource.  What is more, you can request a specific thumbnail size, and the icing on the cake is that you can request that the thumbnail be in a specific format, such as a bitmap.

This was perfect.  In my JavaScript I request the thumbnail using code like this:

  var xmlhttp = new XMLHttpRequest();
  var url = pbnote.webUrl + '/thm/note/' + noteGuid + '.bmp?size='
            + pbnote.CONSTS.THUMBNAIL_SIZE;
  xmlhttp.open("GET", url, true);
  xmlhttp.setRequestHeader('Auth', pbnote.authTokenEvernote);
  xmlhttp.responseType = 'arraybuffer';
  xmlhttp.onload = function () {
    if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
      pbnote.sender.convertToPebbleBitmap(xmlhttp.response, onOk, onError);
    } else {
      pbnote.log.e('fetchBitmap xmlhttp.readyState=' + xmlhttp.readyState +
                   ' and xmlhttp.status=' + xmlhttp.status);
    }
  };
  xmlhttp.send();
The size constant is 140.  I’m requesting bitmaps by adding ‘.bmp’ to the URL.

This is all very well, and works flawlessly, but what I get back is a color bitmap as a byte array, and what I need to send to the Pebble is a black-and-white bitmap.

Generating the Pebble Bitmap Header (from JavaScript)

I needed to parse the bitmap, and for each pixel convert the Red/Green/Blue components to a black or white pixel, and add the corresponding header.  From JavaScript.

I went searching on the web to see if anyone else had already solved this problem.  No one had, but the good news was that someone had written a library in JavaScript that knew how to parse bitmaps and get the RGB values for each pixel.  The bad news was that they had written the library for Node.js, which uses a special Buffer type for binary data, rather than Typed Arrays that are used in web browsers and the Pebble.

No problem, I thought.  I’ll just write my own Buffer class which wraps a typed array and exposes all the methods that are needed by the library.  That worked, and I was able to parse the bitmap that I received back from the Evernote service, and get the RGB components for each pixel.
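A minimal sketch of such a wrapper follows. Only a couple of representative read methods are shown; the real class implements whatever subset of Node's Buffer API the bitmap library actually calls:

```javascript
// Wrap a typed array and expose Node-Buffer-style accessors. BMP files
// store multi-byte integers little-endian, hence the readUInt*LE methods.
function Buffer(bytes) {
  this._view = new DataView(new Uint8Array(bytes).buffer);
  this.length = bytes.length;
}
Buffer.prototype.readUInt8 = function (offset) {
  return this._view.getUint8(offset);
};
Buffer.prototype.readUInt16LE = function (offset) {
  return this._view.getUint16(offset, true); // true = little-endian
};
Buffer.prototype.readUInt32LE = function (offset) {
  return this._view.getUint32(offset, true);
};
```

Because the wrapper is backed by a DataView over a typed array, the same code runs unchanged in web browsers and in the Pebble's JavaScript environment, where Node's Buffer doesn't exist.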

All that was left was to generate the appropriate header that the Pebble expects:

  // Buffer is our own implementation of the bare minimum of the Node.js
  // Buffer class's functionality, since the bitmap class relies on it
  var buffer = new Buffer(bytes);
  var bitmap = new Bitmap(buffer);

  var width = bitmap.getWidth();
  var height = bitmap.getHeight();

  var data = bitmap.getData(true); // true means it returns 'rgb' objects

  // Calculate the number of bytes per row, one bit per pixel, padded to 4 bytes
  var rowSizePaddedWords = Math.floor((width + 31) / 32);
  var widthBytes = rowSizePaddedWords * 4;

  var flags = 1 << 12; // The version number is at bit 12.  Version is 1
  var result = [];  // Array of bytes that we produce
  pushUInt16(result, widthBytes); // row_size_bytes
  pushUInt16(result, flags); // info_flags
  pushUInt16(result, 0); // bounds.origin.x
  pushUInt16(result, 0); // bounds.origin.y
  pushUInt16(result, width); // bounds.size.w
  pushUInt16(result, height); // bounds.size.h

Generating a Pebble black and white bitmap from a color bitmap (in JavaScript)

Now that I had the header generated (I stole some techniques from a Python tool in the Pebble SDK), I walked through the bitmap’s pixels, converting each pixel to black-and-white:

var currentInt = 0;
for (var i = 0; i < data.length; i++) {
  var bit = 0;
  var row = data[i];
  for (var b = 0; b < row.length; b++) {
    var rgb = row[b];
    // I'm using the lightness method per
    // http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
    var isBlack = (Math.max(rgb.r, rgb.g, rgb.b) + Math.min(rgb.r, rgb.g, rgb.b)) / 2
                   < pbnote.CONSTS.BLACK_CUTOFF;

    // This is the luminosity method, which doesn't seem to give as good results
    //var isBlack = (0.21 * rgb.r + 0.71 * rgb.g + 0.07 * rgb.b)
    //               < pbnote.CONSTS.BLACK_CUTOFF;
    if (!isBlack) {
      currentInt |= (1 << bit); // 1 = white on the Pebble
    }
    bit += 1;
    if (bit == 32) {
      bit = 0;
      pushUInt32(result, currentInt);
      currentInt = 0;
    }
  }
  // Pad the row out to a 32-bit boundary
  if (bit > 0) {
    pushUInt32(result, currentInt);
    currentInt = 0;
  }
}
The code on the Pebble side does nothing more than load the bitmap using gbitmap_create_with_data and display it.

Amazingly enough, images rendered on Evernote for Pebble work:

pebble-screenshot_2014-04-17_12-37-10 pebble-screenshot_2014-04-17_12-37-06

I’m using the Evernote Thumbnail API to download a bitmap, converting it to a black-and-white bitmap, adding the appropriate Pebble bitmap header and sending it over to the Pebble for display.


It might be worth looking at compressing the bitmap before sending it over to the Pebble, but I’d need to balance the reduced battery consumption on the Pebble from sending fewer Bluetooth messages against the increased battery consumption from the extra CPU usage to decompress the image.  It would also be smart to look at dithering the image to approximate grayscale.  If anyone has a library to do that in JavaScript, let me know!

About Damian

DamianPortait10Percent

Damian Mehers is a Senior Software Engineer at Evernote, currently focused on Evernote and wearable devices. Damian created Evernote for the Pebble and the Samsung Galaxy Gear. He also worked on Evernote Food for Android, and created the initial release of Evernote for Windows Phone.

@Damian Mehers damian@evernote.com


Indexing Handwritten Images: From Latin to CJK

With the recent addition of Chinese support, Evernote Recognition System (ENRS) indexes handwritten notes in 24 languages. Each time we add another language, we have to overcome new challenges specific to the particular alphabet and style of writing.

Yet another batch came once we approached the CJK group of languages — Chinese, Japanese and Korean. These languages require support for two orders of magnitude more symbols, each being vastly more complex. Writing does not require spaces between words. Fast scribbling shifts to cursive, making interpretation rely heavily on context.

Before going into specifics of CJK support, let’s first look at the challenges that need to be addressed for Latin script recognition. For our engine, the first step to parsing a handwritten page is finding lines of text. This already could be a non-trivial task. Let’s take a look at an example:

Screen Shot 2014-04-09 at 2.52.39 PM

Lines could be curved. Letters from different lines cross each other and the distance between lines varies randomly. The line segmentation algorithm has to follow the lines, untangling the accidental connections as it goes.

The next challenge comes once the lines are extracted — how to split them to words. This task is mostly simple for printed texts, where there is a clear difference in distance between letters and words. With handwriting, in many cases it is not possible to tell just by distance whether it is a symbol or a word break:

Screen Shot 2014-04-14 at 9.32.56 AM

What could be helpful here is understanding what is written. Then, by understanding the words, you can tell where each begins and ends. But this requires the ability to recognize the line as a whole, not just reading word after word — the way most regular OCR engines operate. Even for European languages, the task of recognizing handwriting turns out to be not that different from the challenges of processing CJK texts. To illustrate, here is an example of Korean handwriting:

Screen Shot 2014-04-14 at 9.33.05 AM

Each line’s flow needs to be traced similarly, with possible overlaps untangled. After a line is singled out, there is no way to even attempt a space-based word segmentation. As with European handwriting, the solution would be to do recognition and segmentation in a single algorithm, using the understanding of recognized words to decide where the word boundaries are to be found.

Now, let’s look at the steps of the symbol-interpretation process. It first estimates where individual characters could begin: smaller whitespaces between strokes, and the specific connecting elements characteristic of cursive handwriting. We have to ‘oversegment’ here, placing extra division points — at this stage we have no clear idea whether a segmentation point is correctly placed on a symbol boundary, or falls inside one:

Screen Shot 2014-04-14 at 4.49.58 PM

To assemble the actual symbols, we will try to combine these smaller parts into bigger blocks, estimating every combination. The next image illustrates an attempt to recognize the combination of the first two blocks:

Screen Shot 2014-04-14 at 4.51.28 PM

Of course, this means that we will have to recognize many more symbol variants than there are actual characters written. And for CJK languages, this in turn means that the recognition process becomes much slower than it is for Latin languages, as estimating different combinations is multiplied by so many more symbols to consider. The core of our symbol recognizer is a set of SVM (“Support Vector Machine”) decision engines, each solving the problem of recognizing its assigned symbol ‘against all the rest.’

If we need about 50 such engines for English (all Latin letters plus symbols), then to support just the most common Chinese symbols we would need 3,750 of them! Recognition would be roughly 75 times slower, unless we devised a way to run only a fraction of all these decision engines each time.

Our solution here is to first employ a set of simpler and faster SVMs, which would pre-select a group of similarly written symbols for a given input. Such an approach usually allows us to net only five to six percent of the whole set of characters, thus speeding up the overall recognition process about 20 times.
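As an illustration only (this is not Evernote’s code, and the data structures are invented), the two-stage idea looks roughly like this:

```javascript
// Stage 1: a cheap coarse classifier picks the small group of
// similarly-written symbols for the input; stage 2 runs only that
// group's expensive one-vs-rest recognizers and keeps the best score.
function recognizeSymbol(strokes, coarseGroups, fineSVMs) {
  var group = null;
  for (var i = 0; i < coarseGroups.length; i++) {
    if (coarseGroups[i].matches(strokes)) { group = coarseGroups[i]; break; }
  }
  if (!group) return null;

  var best = null;
  group.symbols.forEach(function(symbol) {
    var score = fineSVMs[symbol].score(strokes);
    if (!best || score > best.score) best = { symbol: symbol, score: score };
  });
  return best;
}
```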

To decide which variants of the multiple possible interpretations of the handwritten symbols should be selected for the final answer, we now refer to different language models — context that allows us to create the most sensible interpretation of all the possible symbol variants generated by the SVMs. Interpretation starts by simply weighing the most common two-symbol combinations, then raises the context level to frequent three-symbol sequences, up to dictionary words and known structured patterns — like dates, phone numbers, and emails. Next come probable combinations of the proposed words and patterns taken together. At no point in the process, before you weigh all the possibilities in depth, can you tell for sure what the best interpretation of that line is. Only by evaluating millions and millions of possible combinations together, similar to how “Deep Blue” analyzed myriads of chess positions playing against Kasparov, is it possible to arrive at the optimal interpretation.
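A toy sketch of the lowest context level (two-symbol combinations) may make this concrete; the scoring scheme here is invented for illustration, not taken from ENRS:

```javascript
// Score one candidate interpretation of a line by combining per-symbol
// recognizer confidences with the frequency of adjacent symbol pairs.
// Higher is better; unseen pairs get a small floor probability.
function scoreInterpretation(candidate, bigramFreq) {
  var score = 0;
  candidate.forEach(function(s) { score += Math.log(s.svmScore); });
  for (var i = 1; i < candidate.length; i++) {
    var pair = candidate[i - 1].ch + candidate[i].ch;
    score += Math.log(bigramFreq[pair] || 1e-6);
  }
  return score;
}
```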

Once the best interpretation for the line is established, it finally can define the word segmentation. Overlaid green frames on the images below show the best segmentation to words the system could devise:

Screen Shot 2014-04-14 at 5.01.31 PM

Screen Shot 2014-04-14 at 5.02.00 PM

And, as you can see, the process turned out to be mostly the same for both European and CJK handwriting!


In depth: Descriptive Search in Evernote

The following is a behind the scenes walkthrough of Descriptive Search within Evernote by our Augmented Intelligence Engineer, Adam Walz. For our public announcement of Descriptive Search, click here.

Search Box Evernote

Evernote has always had a great keyword search experience, able to surface notes that match not only in the body of the note but also within images and documents. However, when confronted with a blinking cursor in the search bar, we often find ourselves struggling to remember what exactly we named that particular note. We realize that while keywords are an integral part of the search experience, we as humans have a natural tendency to relive our memories by the places we’ve been, the dates we created our notes, or even the types of files our notes contain. How often do you wish you could find things just by typing ‘San Francisco last week’? Now, with Descriptive Search for Mac, you can!

Screen Shot 2014-04-08 at 1.43.44 PM

Realizing that search needs to evolve from keywords, Descriptive Search was our attempt to create a natural extension of your thought process. Because this is an ambitious attempt, we decided to start with support for the English language on the Mac client. In forthcoming releases, we will expand to additional languages and platforms.


Tearing Open The Seams

Evernote has always had an advanced Search Grammar which also allows you to search the metadata associated with your notes and resources. However, using it requires a syntax that is difficult to remember, and far from intuitive.

For example, a search for all of your notes that contain office documents would entail typing the query:
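Using Evernote’s search grammar, that query has to spell out every Office MIME type with the `resource:` operator; a representative version (the exact list of types shown here is my assumption) looks something like:

```
any: resource:application/msword resource:application/vnd.ms-excel resource:application/vnd.ms-powerpoint resource:application/vnd.openxmlformats-officedocument.wordprocessingml.document
```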


Now that’s a query that I wouldn’t expect anyone to remember or want to type! Descriptive Search allows you to do this by simply typing ‘office documents’.

Our implementation goal for the first version of Descriptive Search was to support a large subset of what our Search Grammar supports by adding a natural language query interface, so you can use everyday language to find notes the way you remember them.

At an implementation level, this comes down to a classic parsing problem. Parsing is the process of analyzing the syntax of a piece of text to pull out the important parts and understand the meaning behind the text. When we have to deal with the nuances of multiple languages, dialects, and an ever-expanding dictionary, things get interesting.

Natural languages are characterized by their lack of a strict grammar. You could write the same search many different ways, and Evernote should be able to understand that any number of queries could have the same meaning. In the world of natural language processing this is called a semantic grammar. Our semantic grammar is specifically created to pull out the meaning in your query, and throw away anything that is not helpful in finding your notes.


This all comes together in the following sequence of steps:

1.  Preprocessing Step

Like most natural language systems, our goal in the preprocessing step is to sanitize the user’s input query to remove irrelevant language nuances. This begins by detecting the words in the query. While this might seem like a trivial task, finding what constitutes a word in languages like Chinese and Japanese gets into difficult areas of natural language processing, making this one of the most critical steps to get right. Since we did not want multiple conjugations of a phrase to cause parsing to fail, we then remove common words from the input language (stop-word detection and removal) and condense the words down to their roots (stemming).
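For illustration, here is a drastically simplified version of that step; the stop-word list and the suffix-stripping ‘stemmer’ are toys, not what Evernote ships:

```javascript
// Tokenize a query, drop common stop words, and crudely stem each
// remaining word by stripping a few English suffixes.
var STOP_WORDS = { "the": true, "a": true, "of": true, "in": true };

function preprocess(query) {
  return query
    .toLowerCase()
    .split(/\s+/)
    .filter(function(w) { return w && !STOP_WORDS[w]; })
    .map(function(w) { return w.replace(/(ing|ed|s)$/, ""); }); // toy stemmer
}
```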

Like everything else we do at Evernote, we also wanted to make your job easier, so we expand your query using our Type-Ahead Search system. This means faster, more accurate searches with less typing.


2. Honoring User Specified Contextual Hints

It is often the case that the meaning of a search is ambiguous when seen only one word at a time. The word ‘cute’ could be a tag, part of a notebook title, or simply a keyword in the text of your notes. However, if you instead typed ‘tagged cute’, we look at the neighboring words to determine whether you specified a contextual hint for the word — in this case, ‘tag’. We then verify that this is a good suggestion by doing a quick cross-reference against your tag list to make sure that you actually do have a tag named ‘cute’.


3. Grammar Parsing

The next step is what makes Descriptive Search so powerful. A word such as ‘image’ may not appear as a keyword anywhere in your notes, and even if it did, when you type ‘image’ you are most likely not looking for a keyword in the text of your notes. You want to see notes that have attached image files. This is where parsing comes in.

What we have at the heart of this system is a Semantic Grammar that is carefully handcrafted to meet the specifics of the particular user language, such as English. This grammar takes into account the nuances of the language with several very advanced parsing rules, even finding synonyms of the words we want to detect. The grammar is then cross-compiled to the native platform, which also gives us the benefit of maintainability and portability across platforms.

At runtime, we pass the preprocessed query through this semantic grammar parser which looks for character patterns that closely match one of the rules in our grammar. All matched character patterns are then replaced with an unambiguously formatted search token.

As seen in the example below, the pattern ‘image’ detected by the grammar parser is replaced by its equivalent Evernote search grammar resource token.
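Putting steps 1 through 3 together on a running example (the input phrasing here is my assumption; the resulting query is the one used in the next step):

```
tagged cute cat images  →  cat tag:cute resource:image/*
```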


4. Content Matching

While the Semantic Grammar can cover patterns that are common across all Evernote users, like the existence of date ranges or file types, there is another very important category of queries which is derived from the user’s own personal content — e.g. your own notebooks, tags, or places you’ve been. In this step we close that gap by taking the unmatched words left in your query and checking them against your own user index.

Going back to our example query from the previous step (‘cat tag:cute resource:image/*’), the word ‘cat’ is still unmatched. We cross-reference the unmatched words in the query against the metadata in the search index to find that maybe you have a notebook named ‘cat pictures‘ and a tag named ‘cats‘. In the case of multiple metadata matches like this, we use a probabilistic model to determine which of these suggestions most closely matches your intent. Remember, if we happen to get this wrong, the Contextual Hint step can be used to provide more context about your meaning.

content match.png

5. Suggestion Creation

We have now found a match for all of the important words in your search. However, pulling out the meaning of your query behind the scenes is only part of what we feel makes a search “Descriptive”. The user experience matters a great deal to us, and we know you wouldn’t want to see a result in the form of “notebook:cats tag:cute resource:image/*”.

The Suggestion Creation step formats the search grammar result into an equivalent descriptive phrase in your language, making the suggestions conversational and easy to read.


What’s Next?

If you haven’t already used this feature on the Mac client I encourage all of you to give it a try. If you need a little help getting started you can refer to our knowledge base article. At the same time, our team is hard at work bringing this experience to all the other platforms and increasing our support for different languages and a broader range of queries.

Keep an eye out for these new features with coming releases.

About Adam

adam-walz

Adam Walz is an Augmented Intelligence Engineer at Evernote, where he is focused on taking the Search experience to the next level. The Augmented Intelligence team is on a mission to improve the search experience for Evernote and make memories more discoverable.


[Opening] Join Adam and the ‘Augmented Intelligence’ Team – we are hiring! Software Engineer


In depth: Pebble OAuth configuration using Node.js

 The following is a behind the scenes walkthrough on building apps for the Pebble Smart Watch by our lead wearables engineer, Damian Mehers. For our announcement of Evernote for Pebble, click here.


When Pebble released the new Pebble 2.0 SDK and app store, it suddenly became possible to do a whole lot more with Pebble apps than before, including setting up a configuration screen for your Pebble App.

In this post, I’ll take you through my journey of setting up the Pebble App configuration to use OAuth to authenticate to a web service (Evernote in this case), by way of Node.js.  I’ll also share my experience publishing the Node.js app to Amazon’s cloud services (AWS), and Microsoft’s cloud services (Azure).

How Pebble Configuration works

You might be wondering: how on earth can you do configuration on that tiny Pebble screen?  The answer: you can for some things, but for others, like authentication, you can’t.

What you can do though is configure your Pebble app via your phone’s nice, big screen, and then pass configuration data from the phone to the watch.

A typical Pebble app will have two components: a C app that runs on the Pebble watch and a companion app that runs on the phone.  The companion app communicates with the watch app and does the heavy-lifting of talking to the internet to access data, process it, and send it to the Pebble for display, interaction, and so on.

The phone-based companion apps can be written in Objective-C if they run on iOS, or in Java if they run on Android, or—my personal favorite—they can be written in JavaScript, in which case they run on both iOS and Android.

Your companion app’s JavaScript runs within a JavaScript engine contained within Pebble’s official Android and iOS management apps, which you can download from their respective app stores.


Each Pebble app that you create has a configuration file, called appinfo.json. In this file you can indicate that your app supports configuration by modifying the capabilities item:

  "uuid": "e0898619-eccd-4370-9141-2ce19b91c432",
  "shortName": "DamianDemo",
  "longName": "DamianDemo",
  "capabilities": [ "location", "configurable" ],

Once you do this, when you open the Pebble iOS app you’ll see a “Settings” button under the app you’ve developed (in this case ‘DamianDemo’).  Below I’m opening the iOS Pebble app (in an iOS folder I’ve called “Connected”), and I’m showing the Settings button.

image_thumb14 image_thumb15

What happens when you tap the “Settings” button? In the companion app’s JavaScript code you must register a callback to be invoked when the user tries to configure your app:

Pebble.addEventListener("showConfiguration", function() {
  var url = '...';
  console.log("showing configuration at " + url);
  Pebble.openURL(url);
});
This will open up a browser within the Pebble iOS app to the URL you specify, and there you can display configuration options and eventually return data to your JavaScript companion app.  In this case, I want to display an OAuth authentication screen, authenticate the user, and return OAuth tokens back to my Pebble app.

I decided to use Evernote as an example.

Finding an OAuth example

I went searching for an Evernote OAuth example and quickly came across one that was based on Node.js at the Evernote GitHub repository:


I’d never used Node before, but it was very easy to install, add dependencies, and run:


Accessing the Node.js Evernote OAuth example locally

I decided to make sure it was working properly by firing up a browser on my desktop, and, sure enough, it seemed to be working:



Invoking the OAuth example from the Pebble app

Next, I updated the Pebble’s JavaScript companion app, so that when the user tapped the “Settings” button it navigated to the Node.js app on my desktop, via my desktop’s local IP address:

Pebble.addEventListener("showConfiguration", function() {
  var url = '';
  console.log("showing configuration at " + url);
  Pebble.openURL(url);
});


Now when I tapped the “Settings” button I was indeed taken to the Node.js app running on my desktop:


Works locally but not from the Pebble iOS App?

Unfortunately, the “Re-authorize” button did not respond when I tapped it, even though it worked when I accessed the page through the browser on my desktop (the same machine as the Node.js app). I went through all kinds of scenarios in my mind: perhaps some kind of JavaScript was disabled? Maybe some redirect wasn’t working?

After an embarrassingly large amount of time, I discovered the cause while browsing through the Evernote OAuth sample code I was running: the code was set to redirect to localhost. That was why it worked on the desktop, but failed on the iOS device. I changed the code to redirect to the Node.js app I was running on my desktop instead, and it worked:

var Evernote = require('evernote').Evernote;

var config = require('../config.json');
// var callbackUrl = "http://localhost:3000/oauth_callback";
var callbackUrl = "";

// home page
exports.index = function(req, res) {

Changing the OAuth example to return values to the Pebble app

When I say it worked, I mean that it did what it was supposed to do, but I needed it to return the values to my JavaScript code.  This is the Node.js app code that gets invoked once the authentication is complete (in the redirect that wasn’t working initially):

// OAuth callback
exports.oauth_callback = function(req, res) {
  var client = new Evernote.Client({
    consumerKey: config.API_CONSUMER_KEY,
    consumerSecret: config.API_CONSUMER_SECRET,
    sandbox: config.SANDBOX
  });

  console.log("Calling getAccessToken");
  client.getAccessToken(
    req.session.oauthToken,
    req.session.oauthTokenSecret,
    req.query.oauth_verifier,
    function(error, oauthAccessToken, oauthAccessTokenSecret, results) {
      console.log("getAccessToken got " + oauthAccessTokenSecret);
      if (error) {
        console.log("Error getting access token: " + JSON.stringify(error));
        res.redirect('/');
      } else {
        req.session.oauthAccessToken = oauthAccessToken;
        req.session.oauthAccessTokenSecret = oauthAccessTokenSecret;
        req.session.edamShard = results.edam_shard;
        req.session.edamUserId = results.edam_userId;
        req.session.edamExpires = results.edam_expires;
        req.session.edamNoteStoreUrl = results.edam_noteStoreUrl;
        req.session.edamWebApiUrlPrefix = results.edam_webApiUrlPrefix;
        res.redirect('/'); // back to the home page
      }
    });
};
You can see that it redirects back to the home page, but I want it to redirect back to the iOS Pebble app, so that it can hand the result back to my JavaScript companion app.  There is a standard way to do that, defined by Pebble.  You need to redirect to pebblejs://close passing any parameters you wish.

This is my updated code:

      } else {
        // Package everything the Pebble app needs into a single object
        var result = {
          oauthAccessToken : oauthAccessToken,
          oauthAccessTokenSecret : oauthAccessTokenSecret,
          edamShard : results.edam_shard,
          edamUserId : results.edam_userId,
          edamExpires : results.edam_expires,
          edamNoteStoreUrl : results.edam_noteStoreUrl,
          edamWebApiUrlPrefix : results.edam_webApiUrlPrefix
        };
        var location = "pebblejs://close#" + encodeURIComponent(JSON.stringify(result));
        console.log("Warping to: " + location);
        res.redirect(location);
      }
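On the companion-app side, whatever is passed after pebblejs://close# arrives via Pebble’s “webviewclosed” event. A minimal sketch of receiving it (the localStorage key is my own choice):

```javascript
// Decode the payload handed back through pebblejs://close#...
function parsePebbleConfig(response) {
  return response ? JSON.parse(decodeURIComponent(response)) : null;
}

// Wire-up inside the companion app; Pebble is the global provided by
// the PebbleKit JS runtime.
if (typeof Pebble !== "undefined") {
  Pebble.addEventListener("webviewclosed", function(e) {
    var config = parsePebbleConfig(e.response);
    if (config) {
      console.log("Got access token: " + config.oauthAccessToken);
      localStorage.setItem("evernoteConfig", JSON.stringify(config));
    }
  });
}
```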


Deploying to a server

I wanted to run my Node.js OAuth helper app on a server, rather than on my local desktop, so that it would work even when my desktop wasn’t running.

I started off with Amazon Web Services

Amazon Web Services

Of course I jumped in far too quickly, and went and created an Amazon EC2 instance, which is a complete machine into which you can SSH, and then install and configure whatever software you wish.

It turns out there was a far simpler way of doing things: Amazon’s Elastic Beanstalk, which lets you easily deploy Node.js apps and takes care of all the infrastructure behind the scenes, without my needing to perform all the configuration I’d done manually when setting up the EC2 instance.

It was still a little fiddly, with lots of little steps.

But once I’d deployed it, it did work just fine.  I was uncomfortable using “http” for the Node app, since theoretically someone could sniff the packets and see the OAuth tokens in plain text, so I tried shifting to “https”. Try as I might, I couldn’t get it to work.  After much browsing and reading, I came to the conclusion that I’d need to install my own custom certificate and my own custom domain to get it working, which seemed like overkill.

(If you know of a way of accessing Elastic Beanstalk apps using https without using a custom domain, please do let me know in the comments.)

Microsoft Azure

I decided to try Azure instead since, from what I’ve read, it supports https when accessing Node.js apps via the standard Azure hosting domain.

I was happily surprised at how easy it was to deploy the Node.js app to Azure.  I followed their tutorial and within 10 minutes I was up and running, pushing updates using a simple “git push”.  Now my JavaScript configuration launch code and OAuth callback both use https://mydomain.azurewebsites.net/.


Using a Node.js server to handle Pebble configuration such as OAuth was remarkably easy, and since I’d been writing so much JavaScript code in my phone-based Pebble companion app, it seemed natural to continue in Node.js.

I was able to take the Evernote Node.js OAuth example and, by changing fewer than 10 lines of code, get it up and running and passing values back to the Pebble.  The use of https, combined with the fact that the Node.js app stores nothing locally (it just acts as a relay), makes for an attractive solution.

About Damian

DamianPortait10Percent

Damian Mehers is a Senior Software Engineer at Evernote, currently focused on Evernote and wearable devices. Damian created Evernote for the Pebble and the Samsung Galaxy Gear. He also worked on Evernote Food for Android, and created the initial release of Evernote for Windows Phone.

@Damian Mehers damian@evernote.com
