Building a React App from scratch – A retrospective guide

May 2017

It’s been a while since my last article, but I’ve been quite busy. After dipping my toes into React, I took on a larger, more app-like project to really get a feel for how the ubiquitous library operates.

A disclaimer before you read on: this is my second React project – my first of any size. These are the words of a beginner and should not be taken as gospel – hopefully, though, you’ll find this article helpful in your own development.

There’s almost nothing in the web industry that focuses a developer quite like an ominous deadline. As much as we cannot stand being burdened by the knowledge that our creativity is limited by time, it is also freeing, in a sense, to know that we can work towards a finite goal.

That’s exactly how I felt back in mid-February when my son’s primary school sent an email out to all of the parents asking for volunteers to come in and chat about their jobs. “An opportunity!” I thought to myself — and thus, Tag was born.

And so ensued two months of spare evenings diving head first into React. The premise was simple: create a lightweight single-page app aimed at teaching 10-year-olds how web pages are put together.

A spec, of sorts

At minimum, the app should:

  • Teach children the relationship between the raw HTML and what you see in the browser.
  • Engage and excite children with fast, frequent visual feedback.
  • Be easy to use but flexible enough to provide a unique experience for each child.

I knew how the app should work, and for the webpage being built, my first thought was inspired by a recent Wes Bos tutorial from his JavaScript30 course: a simple page which would play musical notes when pressing a corresponding key. After some thought — and the realisation that I couldn’t guarantee all the computers in the school would produce sound — I settled on a similar idea, but instead of sound, it would produce a neon light effect on the letters inserted.

Diving in at the deep end

If it’s deadlines that keep you focused, the opposite could be said for learning a new framework, library or toolkit during development.

I’ve been working with basic JavaScript and the common heavyweight frameworks for over 15 years now; but when one doesn’t have the luxury of experimenting with a new bit of kit in a sterile environment, it can lead to frustration and, worse, delays in development. This was mainly the case while discovering the virtues of Babel and Webpack, and working out how to unit test ES2015-style JavaScript.

After much juggling of node modules and npm i commands, I’d finally settled on the following environment for the app:


  • interact.js – for drag and drop.
  • promise – to shim ES6 promise support when needed.
  • popper.js – for accurate constant alignment of pop-up elements. (Although I’m still not totally sold on this one.)
  • react, et al – for the view/controller framework.
  • redux – to store and manage both volatile and stored state.
  • superagent – for performing AJAX requests, mainly.

Some of the browser-based APIs also used include localStorage, postMessage and JSON.

Build tools

  • Webpack – (later replaced with Rollup) for building a deployable bundle.
  • Babel – to convert code from ES2015 to ES5 for Node compatibility.
  • postcss – to leverage autoprefixer and cssnano for automated vendor prefixing and minification respectively.
  • eslint – as part of the QA process.

Testing tools

  • mocha – one of the best Node.js testing frameworks around – extended with chai for making expressive assertions.
  • mockery – for mocking internal and external libraries during testing.
  • JSDom – to provide the test suite with a simulated DOM environment.

Node-based package managers like npm and yarn make it incredibly easy to manage these dependencies, lock them to the required versions and experiment as needed. The main drawback is that you must keep an eye on what you do and don’t need, to avoid accumulating giant lists of unnecessary dependencies.

Why I built for ES5

ES2015 gives us a great deal of useful new language features: arrow functions, expressive variable declaration, Array and Object tools, and the potentially incredibly powerful import and export statements. Not to mention Promises, even if they’ve been around in the form of a polyfill for a while now.

The list of these features not available in Node is shrinking by the week, but while any of them are missing, a transpile step must be performed. The single most important reason for this is that application developers will likely be importing your library within Node, and thus it should work in their environment with minimal fuss.
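In practice, this transpile step typically means Babel with its es2015 preset – a minimal .babelrc along these lines (a sketch, not necessarily Tag’s exact configuration) covers it:

```json
{
    "presets": ["es2015"]
}
```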

This is why there is an ES5 compatible script defined as the entry point to the library:

// package.json
  "main": "dist/index.js",

A point of note here is that Tag provides two includable files: njp-tag for the app container, and njp-tag/view, which operates the view frame.

Unit testing

I’ll be honest – with a few exceptions (such as ‘FormField’, ‘Droplet’, and ‘PropTypes’) this project was not developed using any sort of TDD methodology.

Some of the tests are still to be written — albeit with great care as to not let the current functionality influence the test cases too much.

That being said, the testing framework of choice for this project was Mocha – a fantastic Node-based testing framework with browser support. Alongside it was Chai.js, which provided me with very expressive assertions that read like plain English.

// test.PropTypes.js
expect(() => PropTypes.stringNotEmpty('')).to.throw(Error, 'Error in Droplet');

As for the testing environment itself, I opted to work exclusively within the CLI rather than the browser. This meant that further integration with CI tools like Travis would be much easier, without having to rely on libraries like PhantomJS.

This did present one problem – I wouldn’t have a DOM. This issue was quickly solved with JSDom, a JS-based DOM implementation which very effectively simulates a near-browser environment. You don’t get rendering, but for a test environment, it wasn’t needed.

Debugging tests also proved problematic. As I’m sure you know, Node’s debugging abilities are nowhere near as good as the Chrome dev tools, but thankfully you can now use those too:

// package.json
  "scripts": {
    "test": "NODE_ENV=test mocha",
    "test-live": "NODE_ENV=test-live mocha --inspect --debug-brk"
  }

The two scripts defined here provide both testing environments – npm run test fires up the basic, entirely CLI-based test suite, and npm run test-live gives me a Chrome debugger URL which can be used to set breakpoints and use the debugger statement within the suite or the tested code. I thoroughly recommend looking into it.

For my next project, I’ll be giving proper TDD a go, as the benefits were clear during the development of the more complex classes produced within this app.

Organising a React app

The two entry points to the app, which exist as files within the src folder, are named src/Index.js and src/View.js. The former does little except include the main class, src/lib/App, while the latter is more self-contained, including its own class code in the file itself.

For the rest of the structure, it’s been organised as follows:

  • assets/ – Almost exclusively Objects, some of these files export more than one member. The similarity between them all is their provision of data and constants for use throughout the app.
  • components/ – The main React view files, comprising almost all of the viewable UI.
  • components/containers/ – Contains the Redux-aware containers (as well as the main Canvas class). These classes act almost as controllers.
  • components/dialogs/ – Technically pure view files, but mainly concerned with the display of dialogs.
  • components/views/ – The rest of the view files – a mix of class- and function-based React components.
  • lib/ – JS library classes, used to provide support for Droplets, templating, cross-frame communication, drag & drop, field management, validation and the tour UI.
  • state/ – Redux state-specific functions and utilities.
  • styles/ & img/ – These are both self-explanatory, but worth mentioning that they are bundled with Rollup.

There appears to be no “right” way to organise a React app; the unopinionated nature of the library is both a pleasure to use and, at times, a source of confusion for those starting out for the first time. However, many React developers seem to agree that components should be organised depending on their complexity and whether or not they also act as controllers.

For those wanting to read more about the difference between presentational and container components, Dan Abramov (the author of Redux) wrote a great guide.

Validating my own propTypes

One part of the way Tag can be implemented is by defining custom droplets. These can be combined into a Palette, which provides users with a list of items to be dropped onto the template.

Droplets can be customised in various ways. Here’s an example:

// a class attribute droplet
{
    'name': 'Sign',
    'dropletType': 'attribute',
    'key': 'class',
    'value': 'bricks',
    'attachmentIds': ['sign_class'],
    'guidance': '<p>Text explaining how the droplet works.</p>'
}

The above is a single droplet with a type of ‘attribute’. This means that, when rendered with the Template class, it will appear as a DOM element attribute (class="bricks"). There are three droplet types in total, each with varying properties that change how they operate. For instance, the following droplet is editable before being placed:

// an element droplet
{
    'name': 'Letter',
    'dropletType': 'element',
    'innerHTML': '',
    'tagName': 'a',
    'attrs': {
        'href': '#',
        'class': 'white'
    },
    'attachmentIds': ['letter'],
    'editable': {
        'attrs': {
            'class': {
                'type': 'dropdown',
                'required': true,
                'label': 'Choose a colour',
                'options': ['white', 'red', 'yellow', 'pink', 'blue', 'green', 'teal'],
                'value': 'red'
            }
        },
        'innerHTML': {
            'type': 'text',
            'required': true,
            'label': 'Type one letter',
            'placeholder': 'A, B, C etc...',
            'maxlength': 1
        }
    },
    'guidance': '<p>Text explaining how the droplet works.</p>'
}
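To give an idea of what rendering a droplet involves, here’s a rough sketch of the ‘attribute’ case – renderAttributeDroplet is a hypothetical helper of mine, not Tag’s actual Template class:

```javascript
// hypothetical sketch - not Tag's Template implementation
// an 'attribute' droplet contributes a key="value" pair to its target element
function renderAttributeDroplet(tagName, droplet) {
    // e.g. { key: 'class', value: 'bricks' } on a <div> becomes
    // <div class="bricks"></div>
    return '<' + tagName + ' ' + droplet.key + '="' + droplet.value + '"></' + tagName + '>';
}
```

Calling renderAttributeDroplet('div', { key: 'class', value: 'bricks' }) produces the class="bricks" markup described above.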

The varied number of values and properties called for a validator, which could ensure that droplets being added to Tag are constructed properly and won’t cause operational problems. As such, I created a basic prop type validation system which could work in a similar way to React’s own tools.

When parsing a Droplet, the following basic requirements are invoked:

// src/lib/Droplet.js
// initialisation - validate main props
this._validateAndSet([
    'name', 'dropletType', 'key', 'value',
    'attachmentIds', 'guidance'
], this);

// src/lib/Droplet.js
_validateAndSet(values, context) {
    // validate each value as a prop key, setting into `context` as a value
    values.forEach((value) => {
        if (Droplet.PropTypes.hasOwnProperty(value)) {
            // validate the prop
            if (Droplet.PropTypes[value](
                    this._originalSettings[value] || null,
                    this._originalSettings.dropletType || null
                )) {
                // prop was valid, set into context
                context[value] = this._originalSettings[value];
            }
        } else {
            // prop isn't registered
            throw new Error('Droplet property "' + value + '" definition does not exist.');
        }
    });
}

// src/lib/Droplet.js
Droplet.PropTypes = {
    value: PropTypes.string.isRequired,
    name: PropTypes.string.notEmpty.isRequired,
    attachmentIds: PropTypes.arrayOf.string.isRequired,
    dropletType: PropTypes.string.isRequired,
    attrs: PropTypes.object,
    tagName: PropTypes.string.notEmpty.isRequired,
    innerHTML: PropTypes.string,
    editable: Droplet._validateEditableSet,
    key: PropTypes.string.notEmpty.isRequired,
    guidance: PropTypes.string
};

When parsing one of the three custom types of droplet, additional rules are created on the fly, but broadly, the above rules must be adhered to for every droplet.

I won’t go into too much detail as to how the code works, although you are welcome to see it here. To summarise, a chain function joins multiple validation functions together to produce objects with the property chains you see above, and runs them in turn using the assert function on each test. If any of the tests are falsy, an Error is thrown.
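To make that idea concrete, here’s a heavily simplified sketch of how such a chain might be built – the names here are illustrative, not Tag’s actual implementation:

```javascript
// a simplified sketch of a chainable prop validator - not Tag's actual code
function chain(name, test, parent) {
    return function validate(value) {
        // run the earlier links in the chain first; they throw on failure
        if (parent) {
            parent(value);
        }
        // assert this link's own test
        if (!test(value)) {
            throw new Error('Prop failed validation: ' + name);
        }
        return true;
    };
}

var PropTypes = {};
// a base test, with a stricter test chained onto it as a property
PropTypes.string = chain('string', function(v) { return typeof v === 'string'; });
PropTypes.string.notEmpty = chain('notEmpty', function(v) { return v !== ''; }, PropTypes.string);
```

With this, PropTypes.string.notEmpty('abc') passes, while PropTypes.string.notEmpty('') throws – the same shape of property chain seen in Droplet.PropTypes above.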

This worked well enough for my needs. Although I’m sure the more complex chaining that React provides is much more impressive, this was a fun experiment in creating my own prop validator.

Working with Redux

Redux is a strange library. While the entire codebase is just over 600 lines in total, the effect it can have (if used properly and appropriately) can be profound. It is entirely possible for your implementation of Redux to be bigger than the library itself.

What Redux gives us is a consistent method to manage the state of an application in a reliable way. Its main intention is to avoid unpredictable systems and the potential side effects that can arise from implementing state management without the strict rules Redux provides.
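That consistency is easier to appreciate when you see how little machinery the pattern needs – the following toy store (an illustration of the idea, not the real Redux library) holds state and lets a pure reducer produce each new version of it:

```javascript
// a toy illustration of the Redux pattern - not the real library
function createToyStore(reducer, initialState) {
    var state = initialState;
    var listeners = [];

    return {
        getState: function() { return state; },
        dispatch: function(action) {
            // the reducer is a pure function: (state, action) => next state
            state = reducer(state, action);
            listeners.forEach(function(fn) { fn(); });
            return action;
        },
        subscribe: function(fn) { listeners.push(fn); }
    };
}
```

Dispatching { type: 'INCREMENT' } twice against a counter reducer, for instance, leaves getState() returning 2 – every change flows through the one reducer, which is the whole point.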

So naturally, the first thing I did was break those rules.

Well, not quite – the main departure I made from the recommended path was to allow the storage of functions within the store. This has meant organising my code slightly differently from how it might usually be done. The summarised default state tree defined in Tag is as follows:

// src/assets/default-state.js
export default {
    app: {
        // ...global application state...
    },

    zones: {},

    UI: {
        // ...non-persistent UI state...
    }
};
It is within the UI portion of the state that functions are allowed to exist. Specifically, functions that provide callbacks for events such as the confirmation of dialogs.
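As an illustration (a hypothetical action creator, not Tag’s actual code), a dialog action might carry its confirmation callback into the store like this:

```javascript
// hypothetical sketch - storing a callback in the non-persistent UI state
function showConfirmDialog(message, onConfirm) {
    return {
        type: 'UI_DIALOG_SHOW',
        UI: {
            dialog: {
                message: message,
                // this function lives in the store until the dialog closes,
                // which is exactly why the UI branch can never be serialised
                onConfirm: onConfirm
            }
        }
    };
}
```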

I realised during development of the store that while persistence of the state (using localStorage) would be useful, not everything needed to persist – especially not the presence of dialogs, the stage of a tour taking place, or the presence of a tooltip against a UI element. These variables would be stored safely within UI while the rest of the persisting state could be stored elsewhere. A simple piece of logic during instantiation therefore manages this divergence:

// src/lib/App.js
var stored_state = storage.get('state', undefined);

if (stored_state !== undefined &&
    typeof stored_state === 'object') {
    // stored state exists - reset UI (which is non-persistent)
    stored_state.UI = defaultState.UI;
}

And for saving the state, which is done during specific actions:

// src/state/reducers.js
function storeState(state, key) {
    var current_state = storage.get('state'),
        new_state = Object.assign({}, defaultState, current_state);

    new_state[key] = Object.assign({}, state);
    new_state.UI = null;

    storage.set('state', new_state);

    return state;
}

This allowed me to store the state I wanted to keep, and trash the state I didn’t.

In case you’re wondering how the Redux dev tools handle monitoring of state changes within the function variables of the UI property — they don’t. The actionsBlacklist property of the extension instantiation is convenient for ignoring the actions that would affect such data.

// src/lib/App.js
this._store = createStore(
    reducers,
    typeof window !== 'undefined' &&
    window.__REDUX_DEVTOOLS_EXTENSION__ &&
        window.__REDUX_DEVTOOLS_EXTENSION__({
            // black list all session-based non-persistent actions
            // (some of which contain unserialisable objects)
            actionsBlacklist: [
                // ...and so on...
            ]
        })
);

Putting it together

Initially, the two main Tag entry points were converted to IIFEs with Webpack. The benefit of this was obvious – it would run in a browser, which is, after all, the main goal. This worked well enough, and I still use it for implementations of Tag. However, I then discovered a new library called Rollup.

After reading more about Rollup, and learning why and when I could use it, it felt like a better fit for the library than Webpack. It was easier to produce immediate results in initial testing, and the code it produced felt more lightweight. Webpack includes extra code for perfectly legitimate reasons, but for a library being included in the Node environment, my main concern was providing CommonJS require compatibility, not full browser support. That could come during implementation.

Rollup’s configuration at a very basic level is simple. However, if you have unique requirements, you may find that the simple configuration quickly gives way to using the JavaScript API. An example of its use is below:

// build/rollup.js
rollup.rollup({
    entry: file,
    cache: cache[file],
    plugins: options.plugins,
    external: options.external
}).then(function(bundle) {
    // cache the output for speed
    cache[file] = bundle;

    // write the output file
    bundle.write({
        format: options.format,
        intro: prepend,
        dest: path_dest + file_bundle,
        sourceMap: options.sourceMap ? 'inline' : false
    });
});

The premise is quite simple: Take an entry point, transpile it with all the plugins defined, then write to an output file in the format required. Source maps optional.

There’s also some caching in use here – the idea being that the bundle returned by rollup() can be given as the cache property of the next run. This is useful in a watching environment such as node-watch, as Rollup can then detect whether or not it needs to re-compile either of the two entry scripts.

Rolling up SVG sprites

One thing I did find lacking in Rollup was the ability to process SVGs. There’s support for plain text includes and images, but nothing I could find that would specifically create an SVG sprite which can be referenced using a combination of <use> tags and path IDs.

To solve this, I wrote a script which pre-generates a sprite sheet for importing into the app data as a JSON file:

/**
 * Returns JSON-based sprite data
 */
module.exports = function(dest) {
    // initialisation code, removed...

    return new Promise(function(resolve, reject) {
        // collect files with glob
        glob('src/img/**/*.svg', function(error, files) {
            if (error) reject(error);

            // loop through all of the collected files
            files.forEach((file) => {
                // parse the SVG XML and add to the SVG string
                data.svg += parseSVGXML(file);

                // store glyph ID based on filename
                data.glyphs.push(
                    path.basename(file).replace(/\.svg$/, '-sprite')
                );
            });

            // finally, write into file
            fs.writeFileSync(dest, JSON.stringify(data));
            resolve(dest);
        });
    });
};


This isn’t the whole script — there’s also a function which, given the XML data of an SVG image, parses it into JSON and processes it for compatibility with use as a sprite frame, as well as another function for converting the JSON data back into XML.
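As a rough illustration of the transform involved (a hypothetical sketch, not the script’s actual parseSVGXML), the root svg element of each file essentially becomes a symbol carrying the glyph ID:

```javascript
// hypothetical sketch of turning an SVG document into a sprite frame
function toSpriteFrame(svgSource, id) {
    return svgSource
        // the root <svg> element becomes a <symbol> with the glyph ID,
        // keeping its attributes (viewBox etc.) intact
        .replace(/<svg\b([^>]*)>/, '<symbol id="' + id + '"$1>')
        .replace(/<\/svg>/, '</symbol>');
}
```

Each resulting symbol can then be concatenated into the single sprite string written out above.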

The end result is a file in JSON format which contains both the sprite itself and an Array of IDs which can be referenced by <use> tags to show specific sprite frames. A small React component called Icon then takes care of showing the frames in the following way:

// components/views/Icon.jsx
<svg className="icon" width={props.width} height={props.height}>
    <use xlinkHref={props.glyph} className={className}/>
</svg>

Overall, it’s been a fun process to work within the React framework, and massively satisfying to see the kids engage with the app and get excited about seeing the end result.

There are some things I’d have done differently given another chance:

I’d have started using Redux sooner (or rather, fully assessed whether or not the app needed Redux, sooner). There was far too much back-tracking on an existing half-baked state system in order to implement Redux, but I consider that just to be another benefit of working within a strict ruleset.

Rollup will most likely be my bundler of choice for the next library. Webpack absolutely has a place for the likes of single-page apps and other projects that benefit from a hot-loading package and assets system, but Rollup is much more suited to my needs for creating libraries that need to work in the environments for which they are written.

I’m vowing to work more with TDD in mind next time. While any form of unit testing is better than integration testing or monkey testing alone, the practice of creating failing tests first and working to them as an assertable spec ensures developers remain true to the nature of the app and code more effectively.

What’s next?

There’s still much more work to be done with the app. I’ve taken a short break from coding for a week or two, but there are documents to write, unit tests to complete and plenty more to learn.

If you’re interested in browsing the source code, or trying out the app, you can see the app on Github, or have a play with it online.
