Creating a Copy to Clipboard Button with Bootstrap

The other day, I was trying to implement a UI widget consisting of a text box and a button that automatically copies the contents of the text box to the clipboard, much like the ones on GitHub repository pages or the code listings in the Bootstrap documentation.

In particular, I wanted the behavior of the button to be the same:

  1. When hovering over the button, display a tooltip with the message "Copy to Clipboard"
  2. When the button is clicked and the text is copied, the tooltip message changes to "Copied!"

Creating the text box itself is easy: simply create a Bootstrap input group consisting of a text input and a button addon with a tooltip:

<form>
  <div class="input-group">
    <input type="text" class="form-control"
        value="/path/to/foo/bar" placeholder="Some path" id="copy-input">
    <span class="input-group-btn">
      <button class="btn btn-default" type="button" id="copy-button"
          data-toggle="tooltip" data-placement="top"
          title="Copy to Clipboard">
        Copy
      </button>
    </span>
  </div>
</form>

The more involved part is the JavaScript that wires everything together. Specifically, we want to do the following:

  • When we hover over the copy button, display the tooltip with the original "Copy to Clipboard" message.
  • When we click the copy button, copy the contents of the text input into the clipboard.
  • Once the contents of the text input are copied, change the tooltip message to "Copied!"
  • If we mouse over the button again, the tooltip again displays the original "Copy to Clipboard" message.

First, we need to initialize the tooltip according to Bootstrap's documentation:

$('#copy-button').tooltip();

That was easy. Next, we need to add a click handler for the Copy button that copies the contents of the text box to the clipboard. One way to do this without a third-party library is to first use the Selection API to select the text inside the text box and then execute the copy command with Document.execCommand() to copy it to the clipboard. For a detailed explanation, see this documentation.

$('#copy-button').bind('click', function() {
  var input = document.querySelector('#copy-input');
  // Focus the input so the copy command acts on its selection.
  input.focus();
  input.setSelectionRange(0, input.value.length);
  try {
    var success = document.execCommand('copy');
    if (success) {
      // Change tooltip message to "Copied!"
    } else {
      // Handle error. Perhaps change tooltip message to tell user to use Ctrl-c
      // instead.
    }
  } catch (err) {
    // Handle error. Perhaps change tooltip message to tell user to use Ctrl-c
    // instead.
  }
});

Once the text is copied, we also want to update the tooltip message. To do this, we can trigger a custom copied event. Let's add a handler to #copy-button for a custom event, copied, that carries the message to display on the tooltip.

$('#copy-button').bind('copied', function(event, message) {
  $(this).attr('title', message)
      .tooltip('fixTitle')
      .tooltip('show')
      .attr('title', "Copy to Clipboard")
      .tooltip('fixTitle');
});

Finally, we update the click handler for #copy-button to trigger copied events to update the tooltip message. Putting everything together, we have the following:

$(document).ready(function() {
  // Initialize the tooltip.
  $('#copy-button').tooltip();

  // When the copy button is clicked, select the value of the text box, attempt
  // to execute the copy command, and trigger event to update tooltip message
  // to indicate whether the text was successfully copied.
  $('#copy-button').bind('click', function() {
    var input = document.querySelector('#copy-input');
    // Focus the input so the copy command acts on its selection.
    input.focus();
    input.setSelectionRange(0, input.value.length);
    try {
      var success = document.execCommand('copy');
      if (success) {
        $('#copy-button').trigger('copied', ['Copied!']);
      } else {
        $('#copy-button').trigger('copied', ['Copy with Ctrl-c']);
      }
    } catch (err) {
      $('#copy-button').trigger('copied', ['Copy with Ctrl-c']);
    }
  });

  // Handler for updating the tooltip message.
  $('#copy-button').bind('copied', function(event, message) {
    $(this).attr('title', message)
        .tooltip('fixTitle')
        .tooltip('show')
        .attr('title', "Copy to Clipboard")
        .tooltip('fixTitle');
  });
});

Here is a live demo of this copy-to-clipboard widget in action.

The main downside of this approach is that the copy command is not supported in Safari. One way to mitigate this is to use queryCommandSupported and queryCommandEnabled to check whether the command is supported and fall back gracefully by displaying a "Copy with Ctrl-c" message on the tooltip instead. In essence, this is how the Clipboard.js library works, except wrapped up in a much more polished API.
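For example, such a check might look like the following sketch. The `doc` parameter stands in for the page's `document`; it is passed in here only so the helper can be exercised outside a browser:

```javascript
// Check whether the copy command can be used, guarding against browsers
// that lack queryCommandSupported entirely. In real use, `doc` would be
// the page's `document`.
function copySupported(doc) {
  return typeof doc.queryCommandSupported === 'function' &&
      doc.queryCommandSupported('copy');
}

// Example: choose the initial tooltip message up front.
// var title = copySupported(document) ? 'Copy to Clipboard' : 'Copy with Ctrl-c';
```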

Unfortunately, until the new HTML5 Clipboard API is finalized and adopted by all major browsers, the only cross-browser way to reliably copy to the clipboard is to use Flash. This is the approach taken by libraries such as ZeroClipboard, which is, in fact, the library used by GitHub as well as the Bootstrap documentation. Hopefully, once the HTML5 Clipboard API is available, adding such a simple feature will become much less of a hassle.
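For reference, the API being standardized is promise-based: navigator.clipboard.writeText(). A hypothetical helper that prefers it and falls back to the execCommand approach could look like the following sketch, where `clipboard` is injected only so the branching logic can run outside a browser:

```javascript
// Prefer the promise-based Clipboard API when present; otherwise invoke
// the supplied fallback (e.g. an execCommand-based copy). In a browser,
// `clipboard` would be navigator.clipboard.
function copyText(clipboard, text, fallback) {
  if (clipboard && typeof clipboard.writeText === 'function') {
    return clipboard.writeText(text);
  }
  return fallback(text);
}
```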

Tiles: An Easy Tool for Managing tmux Sessions

I use tmux extensively whenever I write code. Typically, I have about ten tmux windows open in my main tmux session and may have one or two other sessions with fewer windows. My main session is where I do most of my work, and I keep one window per project or bug I am working on. I use my other sessions for writing notes, doing operational tasks on the cluster, and so on.

I found working with raw tmux commands to be cumbersome, so I wrote a simple Python script, Tiles, to make it easier to manage my tmux sessions, create sessions with a predefined list of windows, and attach to existing sessions.

Tiles reads a .tiles configuration file in your home directory. The Tiles DSL was inspired by the syntax of the Bazel build system and looks as follows:

tmux_session(
    name = "session-name",
    windows = [
        ["window-name", "/path/to/directory/for/window"],
        ...
    ],
)

Typically, my .tiles file on my home machine (where I often work on open source projects in my spare time) might look something like the following:

tmux_session(
    name = "default",
    windows = [
        ["tensorflow", "~/Projects/tensorflow/tensorflow"],
        ["bazel", "~/Projects/bazelbuild/bazel"],
        ["jsonnet", "~/Projects/google/jsonnet"],
    ],
)

tmux_session(
    name = "notes",
    windows = [
        ["notes", "~/Notes"],
        ["blog", "~/Projects/dzc/davidzchen.github.io"],
    ],
)
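Incidentally, since this DSL is valid Python syntax, a configuration like the one above can be loaded with a tiny evaluator that binds tmux_session to a collector function. This is only an illustrative sketch, not Tiles' actual implementation:

```python
# Load a Bazel-style .tiles configuration by executing it with
# tmux_session() bound to a collector. Illustrative sketch only;
# Tiles' real parser may work differently.
def load_tiles(text):
    sessions = {}

    def tmux_session(name, windows):
        sessions[name] = windows

    exec(text, {"tmux_session": tmux_session})
    return sessions
```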

To launch a tmux session with the windows "tensorflow", "bazel", and "jsonnet", with each window starting in its respective directory, run:

tiles start default

Now, the "default" name is special, and running a tiles command without specifying a name will cause tiles to look for a session called "default". Thus, to start my default session, I can simply run the following command:

tiles start

At work, I generally keep my tmux sessions running all the time on my desktop and simply ssh in and attach to them. For example, to attach to an existing tmux session called "ops", simply run:

tiles attach ops

Tiles also has a handy tiles ls command, which simply runs tmux list-sessions to list the currently active sessions.

Some future improvements I am planning to make to Tiles include:

  • Making Tiles available on PyPI so it can be installed with pip
  • Configuring panes within each window
  • Supporting GNU Screen in addition to tmux

If you want to give Tiles a try, check out the Tiles website and documentation and repository on GitHub. Feel free to open an issue or send a pull request if you have any feature requests or find any bugs.

Building a Self-Service Hadoop Platform at LinkedIn with Azkaban

At this year's Hadoop Summit in San Jose, CA, I gave a talk on Building a Self-Service Hadoop Platform at LinkedIn with Azkaban. Azkaban is LinkedIn's open-source workflow manager first developed back in 2009 with a focus on ease of use. Over the years, Azkaban has grown from being just a workflow scheduler for Hadoop to being an integrated environment for Hadoop tools and the primary front-end to Hadoop at LinkedIn.

The abstract and slides are below. A video of my talk will be available in the coming weeks.

Abstract

Hadoop comprises the core of LinkedIn’s data analytics infrastructure and runs a vast array of our data products, including People You May Know, Endorsements, and Recommendations. To schedule and run the Hadoop workflows that drive our data products, we rely on Azkaban, an open-source workflow manager developed and used at LinkedIn since 2009. Azkaban is designed to be scalable, reliable, and extensible, and features a beautiful and intuitive UI. Over the years, we have seen tremendous growth, both in the scale of our data and our Hadoop user base, which includes over a thousand developers, data scientists, and analysts. We evolved Azkaban to not only meet the demands of this scale, but also support query platforms including Pig and Hive and continue to be an easy to use, self-service platform. In this talk, we discuss how Azkaban’s monitoring and visualization features allow our users to quickly and easily develop, profile, and tune their Hadoop workflows.

Slides

A Curious Case of GCC Include Paths

One time, I was building a large C++ codebase and encountered a number of compiler errors that appeared to be caused by constants defined in the system <time.h> not getting picked up. Curiously, it appeared that the time.h in the current source directory was being included instead, even though the include statement read:

#include <time.h>

From my understanding, the difference between the rules for #include <header.h> and #include "header.h" is that the former searches a set of system header directories first, while the latter first searches the current directory. Something was causing GCC to search the current directory for system headers.

To verify that this behavior was not caused by the project's build system, I created a simple Hello World source file hello.cc that included <time.h>:

#include <stdio.h>
#include <time.h>

int main(int argc, char **argv) {
  printf("Hello world.");
  return 0;
}

I created a time.h in the same directory that would raise a compiler error if included:

#error "Should not be included"

Sure enough, when I compiled hello.cc, it raised the error:

$ gcc -o hello hello.cc
In file included from hello.cc:2:0:
./time.h:1:2: error: #error "Should not be included"

However, when I ran the same command as root, the compilation succeeded. This meant that something in my user's environment differed from root's and was causing GCC to search the current directory for system headers. That was when I remembered that I set CPLUS_INCLUDE_PATH in my shell startup script so that GCC would search other directories, such as /opt/local/include, since I use MacPorts.

I finally found that the reason the current directory was being searched was that I set my CPLUS_INCLUDE_PATH as follows:

export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/opt/local/include:...

Appending paths to path variables this way seems innocuous, since most of us follow this convention when adding to our $PATHs, but in this case, it turned out not to be so harmless.

Because $CPLUS_INCLUDE_PATH is not set by default, the first entry in the resulting value is an empty string. One would expect an empty string to simply be skipped, as is the case for $PATH. However, I started to wonder whether an empty entry in CPLUS_INCLUDE_PATH actually signifies to GCC that the current directory should be searched. A simple test proved that it does:

$ export CPLUS_INCLUDE_PATH=/opt/include
$ gcc -o hello hello.cc
$ export CPLUS_INCLUDE_PATH=:/opt/include
$ gcc -o hello hello.cc
In file included from hello.cc:2:0:
./time.h:1:2: error: #error "Should not be included"

I eventually found that this was actually an obscure feature of GCC. I am curious to know why this feature was implemented in the first place. The only use case that comes to mind is to get #include <header.h> to behave exactly like #include "header.h", which seems more like a hack than a valid use case.
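One way to sidestep the problem entirely is to avoid producing the empty entry in the first place. In POSIX shells, the ${VAR:+...} expansion makes this straightforward (shown here with the /opt/local/include path from above; adapt to taste):

```shell
# Append /opt/local/include without creating an empty entry:
# ${VAR:+...} expands to nothing when VAR is unset or empty, so no
# stray leading ":" (i.e. the current directory) is added.
export CPLUS_INCLUDE_PATH="${CPLUS_INCLUDE_PATH:+$CPLUS_INCLUDE_PATH:}/opt/local/include"
```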

Gradle Dust.js Plugin

LinkedIn Dust.js is a powerful, high-performance, and extensible front-end templating engine. Here is an excellent article comparing Dust.js with other template engines.

After learning Gradle, I have been using it almost exclusively for my JVM projects. While Dust.js plugins have been written for the Play Framework and JSP, it seems that nobody had written one for Gradle to compile Dust.js templates at build time.

As a result, I wrote my own, which is available on GitHub. The plugin uses Mozilla Rhino to invoke the dustc compiler. You do not need to have Node.js or NPM installed to use the plugin.

Using the plugin is easy. First, add a buildscript dependency to pull the gradle-dustjs-plugin artifact:

buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    classpath 'com.linkedin:gradle-dustjs-plugin:1.0.0'
  }
}

Then, apply the plugin:

apply plugin: 'dustjs'

Finally, configure the plugin to specify your input files:

dustjs {
  source = fileTree('src/main/tl') {
    include 'template.tl'
  }
  dest = 'src/main/webapp/assets/js'
}

At build time, the dustjs task will compile your templates to JavaScript files. The basename of the template file is used as the template name. For example, compiling the template template.tl is equivalent to running the following dustc command:

dustc --name=template source/template.tl dest/template.js

Please check it out and feel free to open issues and pull requests.