Posts

 

How We Talk by N. J. Enfield

This book offers a thought-provoking introduction to the mechanics of conversation. It makes a forceful case for the importance of timing, repair, and the procedural utterances we use to construct successful conversations. Enfield methodically unveils the hidden and unconscious scaffolding we use as we talk to each other. Who knew "Huh?" was so important an element in spoken language? Well-laid-out and easy to read.

A small gripe is that the academic rigor of laying bare all the supporting points sometimes gets in the way of the fascinating insights from the research. If you are at all interested in the field of conversation design, this book is well worth the read.

Published: Nov 2017
ISBN: 978-0465059942
https://www.hachettebookgroup.com/titles/n-j-enfield/how-we-talk/9780465059942/?lens=basic-books

Reading notes:

A thread through the book is that standard linguistic theory focuses only on written forms of language, ruling out a lot of the elements of real conversation with all its errors, corrections, and collaborative flow controls. Enfield tasks himself with correcting this.

“Linguistic theory is concerned primarily with an ideal speaker-listener…”

 Conversation Has Rules

The second chapter goes well beyond the concept of 'politeness' often used to describe the rules of conversation.

“By definition, joint action introduces rights and duties. As Gilbert says … each person (in a conversation) has a right to the other’s attention and corrective action. Each person has a moral duty to ensure they are doing their part.”  

Entering into a conversation is entering a contract for collaborative joint action, with well-defined rules and roles. The contract lays out many corrective measures to repair and realign a conversation if a party believes the other has strayed. I thought the examples of how people correct each other were fascinating. They border on areas of control and power in the relationships of those talking.

This contract with others to allow them to correct us could become a profound point of conflict for conversational interfaces between humans and AI, once AI achieves the ability to join a turn-taking conversation.

Split-Second Timing

The third chapter looks at the importance of timing. It slowly builds a picture that timing is not just an end product of the thought process but is used to convey meaning and help us structure the conversational flow.

A study of Dutch found that 40% of turn-taking transitions in conversation occurred within a 200ms window on either side of zero, with 85% occurring within 750ms of zero. Similar results were found in studies of English and German.

The time to form a conversational response “from intention to articulation”:

  • 175ms - Retrieve concepts
  • 75ms - Concepts to words
  • 80ms - Words to sounds (phonological codes)
  • 125ms - Forming the sounds into syllables
  • 145ms - Executing the motor program to pronounce the words
  • 600ms - Total

If the typical turn-taking gap in English is 200ms, then people are starting to form a response well in advance of the last speaker finishing. A measurable percentage of turns overlap the end of the previous turn, although the total time two people are speaking simultaneously is relatively small at 3.8%, meaning even overlaps are well-timed.

“In their 1974 paper on the rules of turn-taking, Sacks and colleagues identified this ability to tell in advance when a current speaker would finish, referring to the skill as projection.”

Among other signals, we use both pitch and the length of the last syllable to signal to others that the end of a turn is coming, i.e., prosody.

“…study shows that the signals for turn ending combine several features of the sound of utterances, as well as the grammatical structure of the utterance. … the fact that grammar alone cannot be sufficient”

 The One-Second Window

 The preferred and dispreferred responses to the last turn have different time gaps. A preferred response to the question “Can you come out tonight?” would be "yes" or "no" and would be answered promptly. A dispreferred response could be “I don’t know, I will have to check my calendar.” These dispreferred responses are typically in the late zone, at about 750ms.

“In nearly half the dispreferred responses, the first sound one hears is not a word at all, but an inbreath (or click, that is a “tut” or “tsk” sound)."

“…people are now able to manipulate timing to send social signals about how a response is being packaged.”  

“…also see “well” and “um” playing a role in packaging and postponing certain kinds of response in conversation.”

The Traffic Signals

The chapter covers the use of “um” and “uh” in great detail. It concludes that these little words are part of the conversational machine, allowing us to signal a brief delay and forestall the handing over of a turn due to a longer gap than usual. These signals are very frequent, with men making use of them every 50 words and women every 80 words.

“In written language, the reader does not directly witness the act of production.”

Transcripts are often cleaned up, removing inevitable problems with the choice of words, pronunciation, and content. With conversation, this process of production is visible.

“The use of these little traffic signals such as “uh/um”, “uh-huh” and “okay” all illustrate ways in which bits of language are used for regulating language use itself.”

They are the procedural directional instructions of conversation. They form part of the joint commitment to a conversation.

Repair

On average, we need to repair an informal conversation every 84 seconds. These repairs are a normal part of our conversations, and we have ways of signaling a correction, much like we do for a small delay. Our ability to do this is what keeps a conversation on track and moving at speed.

“Hardly a minute goes by without some kind of hitch: a mishearing, wrong word, poor phrasing, a name not recognized.”

  • draft
  • book

Conversations with Things by Diana Deibel and Rebecca Evanhoe

Conversations with Things is a fascinating journey into the world of conversational design. Authors Diana Deibel and Rebecca Evanhoe have gone to great lengths to produce the introduction they wish they'd had when they started. Two chapters, 'Talking like a Person' and 'Complex Conversations', truly demonstrate an understanding of the subject and convey the feeling that they are grounded in many years of practice and analysis.

Although the book was written before generative AI burst into the world, it remains relevant. The chapters on defining intents and documenting conversational pathways could easily be seen now as methods of evaluation and testing for human alignment with LLM-based conversational tools. I hope the authors will one day consider a second edition that encompasses practices in the age of generative AI.

ISBN: 978-1-933820-26-2
Published: April 2021
https://rosenfeldmedia.com/books/conversations-with-things/

Reading notes:

Talking Like a Person

The second chapter, titled "Talking Like a Person," explores various layers of complexity in human conversation. These themes interweave like the conversation structures they describe:

  • Conversation is co-created: Participants collaborate to achieve a shared goal or outcome.

  • Prosody and intonation are fundamental to spoken language, forming part of its structure rather than merely being add-ons in dialogue construction.

  • Turn-taking is the interplay through which conversation forms, encapsulating power structures and much more than its mechanics initially suggest.

  • Conversation unfolds in a messy manner; it is structured but not always in a formal exchange of turn-taking.

  • Repair: We are constantly repairing our conversations, bringing them back to a point where the process of co-creation works. According to Nick Enfield, this occurs approximately every 84 seconds when two people speak.

  • Accommodation involves a chain reaction of adjustments in response to each other and the situation during conversation.

  • Mirroring, or "limbic synchrony," entails matching posture, expressions, and gestures, as well as speech elements like pace, vocabulary, pronunciation, and accents; this process is called convergence.

  • Code-switching involves presenting different identities to elicit a desired outcome.

  • Politeness goes beyond a list of social constraints such as not licking a bowl in a restaurant; in conversation, it serves as a contract between parties. When considered alongside the concept of repairing, it leads to more dynamic and fluid ideas, as expressed by Onuigbo G. Nwoye:

"It’s a series of verbal strategies for keeping social interactions friction-free."

The authors regard Grice's Maxims as a somewhat simplistic foundation for conversation, covering cooperative principles but missing some important elements of conversational theory and design addressed in the points above.

The Rest of the Book

I read this book as part of my research into linguistic user interfaces being built around LLMs and AI chat. While other chapters in the book contain a wealth of valuable material, I am only pulling out a few subjects that have specific interest to me at this time.

Common question types

In the section dealing with scripted flows, the authors identify some of the most common question types posed to users. The book outlines six question types which form the building blocks of turn-taking in older conversational tools:

  1. Open-ended

  2. Menu

  3. Yes-or-no

  4. Location

  5. Quantifying

  6. Instructional

There are a couple of useful UI elements that work with these questions: a confirmation component and a repeat-request component.
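
As an illustration (my own sketch, not from the book), a scripted flow might tag each prompt with one of these question types and treat confirmation and repeat-request as reusable components. The names and structure below are assumptions, written as a small JavaScript example:


// Hypothetical scripted flow: each turn is tagged with one of the six question types
const flow = [
  { id: 'greeting', type: 'open-ended', prompt: 'Hi, what can I help you with today?' },
  { id: 'size', type: 'menu', prompt: 'Would you like a small, medium or large?' },
  { id: 'delivery', type: 'yes-or-no', prompt: 'Do you want this delivered?' },
  { id: 'address', type: 'location', prompt: 'Which address should we deliver to?' },
  { id: 'quantity', type: 'quantifying', prompt: 'How many would you like?' },
  { id: 'payment', type: 'instructional', prompt: 'Tap your card on the reader when you are ready.' }
];

// Confirmation component: play the captured answer back before moving on
function confirm(step, answer) {
  return 'You said "' + answer + '" for ' + step.id + '. Is that right?';
}

// Repeat-request component: used when the user's turn could not be understood
function repeatRequest(step) {
  return "Sorry, I didn't catch that. " + step.prompt;
}

console.log(confirm(flow[1], 'medium'));
console.log(repeatRequest(flow[3]));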

Cognitive load with task order, lists, and prosody

Ordering tasks is crucial for reducing cognitive load. This seems to be most related to the sequencing of instructions.

Spoken lists significantly increase cognitive load for users. The authors delve into detail on reducing complexity to enhance recall, which greatly impacts menu structures.

The absence of prosody in the simple conversion of written text to TTS (Text-to-Speech) significantly increases cognitive load. In the past, SSML (Speech Synthesis Markup Language) has been utilized and text has been scripted in a dialogue style.

"Human conversation is multimodal"

Human conversation is multimodal. This simple statement was one of the strongest messages I took away from the book. We operate by blending all our senses simultaneously to facilitate conversation flow. We employ visual body language along with prosody and the content of our words to communicate effectively.

Lisa Falkson of the Alexa team found that when users are presented with visual and audio information together, they often mute the audio to focus on the visual information.

You can still utilize visual and audio information if you are employing strong visualization reinforced by audio. Lisa uses Alexa’s weather as an example. If you do this, the elements need to be synced. Lisa refers to this as the "temporal binding window" of 400 milliseconds.

Follow up links and reading:

https://www.cambridge.org/core/books/using-language/4E7EBC4EC742C26436F6CF187C43F239
https://www.researchgate.net/publication/231870679
https://onlinelibrary.wiley.com/doi/book/10.1002/9781118247273
https://en.wikipedia.org/wiki/Turn-taking
https://en.wikipedia.org/wiki/Conversation_analysis
https://www.hachettebookgroup.com/titles/n-j-enfield/how-we-talk/9780465059942/?lens=basic-books

  • ux
  • design
  • conversational-design

Conversational Design by Erika Hall

Conversational Design could have been a straightforward view on the emergence of voice interfaces such as Alexa or the UI of customer service bots that live in the bottom right-hand corner of many sites. Instead, Erika Hall explores the rich possibilities of conversational language in UX from a more holistic and inquisitive standpoint. It's a much better book because of that.


ISBN: 978-1-952616-30-3
Published: March 6, 2018
https://abookapart.com/products/conversational-design


Reading notes:

The Human Interface

The first section of the book really sings with its take on the history of spoken language at the centre of human interaction. Burned in my mind are two important insights it brings to the forefront.

  • In oral culture – “All knowledge is social and lives in memory”

  • A written culture – “Promotes authority and ownership”

“These conditions may seem strange to us now (oral culture). Yet, viewed from a small distance, they’re our default state. Because our present dominant culture and the technology that defines it depends upon advanced literacy, we’ve become ignorant of the depths of our legacy and blind to the signs of its persistence.”


The contrast in power dynamics is pronounced, transitioning from the shared ownership of knowledge in oral traditions to the individual possession we see today. Our current written and digital culture is held together by concepts of individualism, intellectual property, and single sources of authority, with ownership at its heart.

The book outlines the key material properties of oral culture as:

  • Spoken words are events that exist in time.
    It’s impossible to step back and examine a spoken word or phrase. While the speaker can try to repeat, there’s no way to capture or replay an utterance.

  • All knowledge is social and lives in memory.
    Formulas and patterns are essential to transmitting and retaining knowledge. When the knowledge stops being interesting to the audience, it stops existing.

  • Individuals need to be present to exchange knowledge or communicate.
    All communication is participatory and immediate. The speaker can adjust the message to the context. Conversation, contention, and struggle help to retain this new knowledge.

  • The community owns knowledge, not the individuals.
    Everyone draws on the same themes, so not only is originality not helpful, it is nonsensical to claim an idea as your own.

  • There are no dictionaries or authoritative sources.
    The right use of a word is determined by how it’s being used right now.

I am now reading Walter Ong’s “Orality and Literacy: The Technologizing of the Word”, which is referenced in this part of the book.

Principles of Conversational Design

The 'Principles of Conversational Design' chapter begins by defining an interface as 'a boundary across which two systems exchange information.' It quickly moves to argue that conversation is the original interface and remains the most widely understood and utilized.

Hall then explores the language philosopher Paul Grice's work on the four conversational maxims, and Robin Lakoff's work on 'The Logic of Politeness,' which introduces a fifth maxim.

The conversational maxims are the cooperative foundation or rules by which humans communicate effectively through conversation, referred to by Grice as the 'Cooperative Principle'.

The conversational maxims

  1. Maxim of Quantity: Information

    1. Make your contribution as informative as is required for the current purposes of the exchange.

    2. Do not make your contribution more informative than is required.

  2. Maxim of Quality: Truth (supermaxim: "Try to make your contribution one that is true")

    1. Do not say what you believe to be false.

    2. Do not say that for which you lack adequate evidence.

  3. Maxim of Relation: Relevance

    1. Be relevant.

  4. Maxim of Manner: Clarity (supermaxim: "Be perspicuous")

    1. Avoid obscurity of expression.

    2. Avoid ambiguity.

    3. Be brief (avoid prolixity).

    4. Be orderly.

  5. Maxim of Politeness (Robin Lakoff)

    1. Don’t impose

    2. Give options

    3. Make the listener feel good

The Rest of the Book

The remainder of the book explores the practice of implementing Conversational Design within UX. It is well-written and contains a wealth of valuable material. At this point in time, my interest lies in researching the new linguistic user interfaces being built around LLMs (large language models) and AI chat. The first half of the book delivered that for me.

There are a number of points to extract from the rest of the book:

When confronting a new system, the potential user will have these unspoken questions.

  • Who are you?

  • What can you do for me?

  • Why should I care?

  • How should I feel about you?

  • What do you want me to do?”



The last question, “What do you want me to do?”, is reflected on later in the book by quoting Jim Kalbach’s work on navigation:

  • Expectation setting: “Will I find what I need here?”

  • Orientation: “Where am I in this site?”

  • Topic switching: “I want to start over”

  • Reminding: “My session got interrupted. What was I doing?”

  • Boundaries: “What is the scope of this site”

Follow up links and reading:

https://abookapart.com/products/conversational-design
https://en.wikipedia.org/wiki/Walter_J._Ong
https://en.wikipedia.org/wiki/Paul_Grice
https://en.wikipedia.org/wiki/Robin_Lakoff
https://en.wikipedia.org/wiki/Cooperative_principle
https://experiencinginformation.com/

  • ux
  • design
  • conversational-design

This is title

This is test text

I am a tech founder, equally passionate about business strategy, digital product creation, information design and code. For many years I have been addicted to exploring data portability and user experience. When possible, I speak at web events.

  • draft

Text Wrangling and Machine Learning – EdinburghJS talk

More organisations are looking to automate tasks and gain insights from large amounts of text. JavaScript is a powerful language for text wrangling and machine learning, allowing developers to quickly and easily manipulate text data. We will look at different approaches from rules-based parsing to neural nets.

We will explore what JavaScript frameworks, workflows and processes are available to build real-world apps. This is a gentle introduction which should be interesting and useful to all JavaScript coders.
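
To give a flavour of the rules-based end of that spectrum, here is a tiny JavaScript sketch (my own illustration, not code from the talk) that pulls email addresses and ISO-style dates out of free text with regular expressions:


// A tiny rules-based text wrangler: extract email addresses and ISO-style dates
const text = 'Contact jo@example.com before 2023-03-23, or sam@example.org after that.';

const rules = {
  email: /[\w.+-]+@[\w-]+\.[\w.-]+/g,
  isoDate: /\b\d{4}-\d{2}-\d{2}\b/g
};

const extracted = {};
for (const [label, pattern] of Object.entries(rules)) {
  extracted[label] = text.match(pattern) || [];
}

console.log(extracted);
// { email: [ 'jo@example.com', 'sam@example.org' ], isoDate: [ '2023-03-23' ] }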

Video
The talk: Starts at 46:30

Slides
The slides are in a GitHub repo with some example code used in the presentation. The code is in the .nnb JavaScript notebook format.

The talk was given at CodeClan on Castle Terrace, on Thursday 23rd March 2023

  • draft
  • ml
  • llm
  • javascript

Hosting multiple node sites on Scaleway with Nginx and Let's Encrypt

Two years ago I wrote a blog post about hosting Node.js servers on Scaleway. I have now started using Nginx to allow me to host multiple sites on one €2.99 a month server.

This tutorial will help you build a multi-site hosting environment for Node.js servers. We are going to use a pre-built Ubuntu server image from Scaleway and configure Nginx as a proxy. SSL support will be added by using the free Let's Encrypt service. I have written this post to remind myself, but hopefully it will be useful to others.

Setting up the account

  1. Create an account on scaleway.com – you will need a credit card.
  2. Create and enable SSH Keys on your local computer, scaleway have provided a good tutorial https://www.scaleway.com/docs/configure-new-ssh-key/. It's easier than it first sounds.

Setting up the server

Scaleway provides a number of server images ready for you to use. There is a Node.js image, but we will use the latest Ubuntu image and add Node.js later. At the moment I am using VC1S servers, which are dual-core x86 instances.

Image of scaleway dashboard
  1. Within the Scaleway dashboard navigate to the "Servers" tab and click "Create Server".
    1. Give the server a name.
    2. Select the VC1S server and the latest Ubuntu image, currently Xenial.
    3. Finally click the "Create Server" button.

It takes a couple of minutes to build a barebones Ubuntu server for you.

Logging onto your Ubuntu server with SSH

  1. Once the server is setup you will be presented with a settings page. Copy the "Public IP" address.
  2. In a terminal window log into the remote server using SSH replacing the IP address in the examples below with your "Public IP" address.
    
    $ ssh root@212.47.246.30
    
    If, for any reason you changed the SSH key name from id_rsa remember to provide the path to it.
    
    $ ssh root@212.47.246.30 -i /Users/username/.ssh/scaleway_rsa.pub
    

Installing Node.js

We first need to get the Node.js servers working. The Ubuntu OS does not have all the software we need so we start by installing Git, Node.js and PM2.

  1. Install Git onto the server - helpful for cloning GitHub repos
    
    $ apt-get install git
    
  2. Install Node.js - you can find different version options at github.com/nodesource/distributions
    
    curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
    sudo apt-get install -y nodejs
    
  3. Install PM2 - this will run the Node.js apps for us
    
    $ npm install pm2 -g
    

Creating the test Node.js servers

We need to create two small test servers to check the Nginx configuration is correct.
  1. Move into the topmost directory on the server and create an apps directory with two child directories.
    
    $ cd /
    $ mkdir apps
    $ cd /apps
    $ mkdir app1
    $ mkdir app2
    
  2. Within each of the child directories create an app.js file and add the following code. IMPORTANT NOTE: in the app2 directory the port should be set to 3001
    
    const http = require("http");
    const port = 3000; //3000 for app1 and 3001 for app2
    const hostname = '0.0.0.0'
    
    http.createServer(function(reqst, resp) {
        resp.writeHead(200, {'Content-Type': 'text/plain'});
        resp.end('Hello World! ' + port);
    }).listen(port,hostname);
    console.log('Load on: ' + hostname + ':' + port);
    

NOTE You can use command-line tools like Vim to create and edit files on your remote server, but I like to use Transmit, which supports SFTP and can be used to view and edit files on your remote server. I use Transmit's "Open with" feature to edit remote files in VS Code on my local machine.

Running the Node.js servers

Rather than running Node directly we will use PM2. It has two major advantages over running Node.js directly: first, the PM2 daemon keeps your app alive forever, reloading it when required; second, PM2 will manage Node's cluster features, running a Node instance on multiple cores and bringing them together to act as one service.

Within each of the app directories run

$ pm2 start app.js

The PM2 cheatsheet is useful to find other commands.
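
If you would rather start both test apps with a single command, PM2 can also read them from a process file. Here is a minimal sketch, assuming the /apps/app1 and /apps/app2 layout used above; save it as ecosystem.config.js and run pm2 start ecosystem.config.js from the directory containing the file.


// ecosystem.config.js - declares both test apps so PM2 can start them together
module.exports = {
  apps: [
    { name: 'app1', cwd: '/apps/app1', script: 'app.js' },
    { name: 'app2', cwd: '/apps/app2', script: 'app.js' }
  ]
};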

Once you have started both apps you can check they are running correctly by using the following command:


$ pm2 list
The result should look like this: Image of PM2 list output

At this point your Node.js servers should be visible to the world. Try http://x.x.x.x:3000 and http://x.x.x.x:3001 in your web browser, replacing x.x.x.x with your server's public IP address.

Installing and configuring Nginx

At this stage we need to point our web domains at the public IP address provided for the Scaleway server. For this blog post I am going to use the examples alpha.glennjones.net and beta.glennjones.net.

Install Nginx


$ apt-get install nginx

Once installed, find the file /etc/nginx/sites-available/default on the remote server and change its contents to match the code below. Swap out the server_name values to match the domain names you wish to use.


server {
    server_name alpha.glennjones.net;

    location / {
        # Proxy_pass configuration
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_max_temp_file_size 0;
        proxy_pass http://0.0.0.0:3000;
        proxy_redirect off;
        proxy_read_timeout 240s;
    }
}

server {
    server_name beta.glennjones.net;

    location / {
        # Proxy_pass configuration
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_max_temp_file_size 0;
        proxy_pass http://0.0.0.0:3001;
        proxy_redirect off;
        proxy_read_timeout 240s;
    }
}

Test that Nginx config has no errors by running:


$ nginx -t

Then start the Nginx proxy with:


$ systemctl start nginx
$ systemctl enable nginx

Nginx should now proxy your domains so in my case both http://alpha.glennjones.net and http://beta.glennjones.net would display the Hello world page of apps 1 and 2.

Installing Let's Encrypt and enforcing SSL

We are going to install letsencrypt and enforce SSL using Nginx rather than Node.js.

  1. We start by installing letsencrypt:

    
    $ apt-get install letsencrypt
    
  2. We need to stop Nginx while we configure letsencrypt:

    
    $ systemctl stop nginx
    
  3. Then we create the SSL certificates. You will need to do this for each domain, so twice for our example:

    
    $ letsencrypt certonly --standalone
    

    Once the SSL certificates are created, you should be able to find them in /etc/letsencrypt/live/

  4. We then need to update the file /etc/nginx/sites-available/default to point at our new certificates:

    
    server {
        listen 80;
        listen [::]:80 default_server ipv6only=on;
        return 301 https://$host$request_uri;
    }
    
    server {
        listen 443;
        server_name alpha.glennjones.net;
    
        ssl on;
        # Use certificate and key provided by Let's Encrypt:
        ssl_certificate /etc/letsencrypt/live/alpha.glennjones.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/alpha.glennjones.net/privkey.pem;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    
        location / {
            # Proxy_pass configuration
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_max_temp_file_size 0;
            proxy_pass http://0.0.0.0:3000;
            proxy_redirect off;
            proxy_read_timeout 240s;
        }
    }
    
    server {
        listen 443;
        server_name beta.glennjones.net;
    
        ssl on;
        # Use certificate and key provided by Let's Encrypt:
        ssl_certificate /etc/letsencrypt/live/beta.glennjones.net/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/beta.glennjones.net/privkey.pem;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    
        location / {
            # Proxy_pass configuration
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_max_temp_file_size 0;
            proxy_pass http://0.0.0.0:3001;
            proxy_redirect off;
            proxy_read_timeout 240s;
        }
    }
    
  5. Finally let's restart Nginx:

    
    $ systemctl restart nginx
    

    Nginx should now proxy your domains and enforce SSL so https://alpha.glennjones.net and https://beta.glennjones.net would display the Hello world page of apps 1 and 2.

Letsencrypt auto renewal

Let’s Encrypt certificates are valid for 90 days, but it’s recommended that you renew the certificates every 60 days to allow a margin of error.

You can trigger a renewal by using the following command:


$ letsencrypt renew

To set up auto renewal:

  1. Edit the crontab; the following command will give you editing options:

    
    $ crontab -e
    
  2. Add the following two lines

    
    30 2 * * 1 /usr/bin/letsencrypt renew >> /var/log/le-renew.log
    35 2 * * 1 /bin/systemctl reload nginx
    

Resources

Some useful posts on this subject I used

  • node.js
  • nginx
  • letsencrypt
  • hosting

Fullstack 2015 notes

Listed below are some of my favourite sessions from the Fullstack 2015 conference. This post is a collection of links for when I am talking to someone about a speaker or project I cannot quite remember. Skills Matter have published videos, unfortunately only for people with a login. The talks were of a high standard and I enjoyed the event. The one sad thing about Fullstack was the lack of diversity, which really needs to be addressed before it's put on again.

Scaling Node.js Applications with Microservices

by Armagan Amcalar - video

Distributed peer to peer architecture for building microservices with Node.js.

  • cote.js “An auto-discovery mesh network framework for building fault-tolerant and scalable applications”

WebRTC Reborn

Dan Jenkins - video, slides

Great talk on where WebRTC is after 3-4 years. Includes discussion of the use of signaling servers.

Debugging your Node.js applications - tools and advice

Stewart Addison - video

Although this was a general talk about debugging Node.js, the interesting element was IBM’s work with appmetrics and appmetrics-elk. There was also a nice intro to using heapdumps and Chrome.

  • appmetrics A tool for monitoring resource and performance data from Node.js-based applications
  • appmetrics-elk A dashboard built using the ELK stack (ElasticSearch, LogStash and Kibana) for visualising data from appmetrics

Workshop: PM2 to manage your micro service app & Keymetrics to monitor them

Alexandre Strzelewicz - video

This was a workshop on how to use PM2 to run your Node.js applications. The speaker also showed the power of Keymetrics, a very polished UI for monitoring resource and performance data from Node.js. Keymetrics has good alerting/notification features. The background issue with both PM2's features and Keymetrics' pricing is that they seem to run against the move to container systems like Docker.

  • PM2 Production process manager for Node.js applications with a built-in load balancer for processor cores.
  • Keymetrics Monitor and orchestrate multiple servers with the same dashboard, built on top of PM2 to grab statistics from your applications.

Chrome DevTools Deep-dive

Addy Osmani - video

A look at performance profiling, JavaScript debugging and animation inspection in Chrome’s dev tools.

The Web Is Getting Pushy

Phil Nash - video, slides, code

A demo of push notifications, both the standard version and with a service worker. The demo also had some great walkthroughs of the code needed for push notifications.

We fail to follow SemVer – and why it needn’t matter

Stephan Bönnemann - code, twitter

Stephan talked about the need for semantic releases and his work taking ideas from the Angular commit message guidelines forward into a new auto-versioning system used by Hoodie.

Surviving Micro-services

Richard Rodger - video, slides, code

For me the best talk of the conference, on how message-oriented systems can fail. Richard covered common failure patterns and methods for their mitigation. It's also an introduction to Seneca.js.

The Javascript Database for Javascript Developers

Mark Nadal - video

Mark gave a talk on Gun, a realtime, peer-to-peer graph database. It works in the browser and on servers, with some interesting sync capabilities. It can also be used as an offline database, like PouchDB.

Civilising Bluetooth Low Energy: bringing BLE devices to the web

Jonathan Austin

A really interesting talk about Bluetooth Low Energy devices and their future, including a demo of Chrome interacting with a Bluetooth device.

Workshop: Developing Micro-services

Peter Elger

This was a workshop to build and deploy an example microservice system. It used a number of technologies including Node.js, MQTT, Docker and InfluxDB.

  • fullstack2015
  • node
  • iot
  • javascript

Building a simple Node.js server on scaleway

This tutorial will help build a simple hosted Node.js server using scaleway in just a few minutes. We are going to use a pre-built server image from scaleway and a Node project from a Github repo. I have written this post to remind myself, but hopefully it will be useful to others.

Setting up the account

  1. Create an account on scaleway.com – you will need a credit card.
  2. Create and enable SSH Keys on your local computer, scaleway have provided a good tutorial https://www.scaleway.com/docs/configure-new-ssh-key/. It's easier than it first sounds.

Setting up the server

Scaleway provides a number of server images ready for you to use. The Node.js image is a little dated, so we will use the latest Ubuntu image and add Node.js later.

screenshot of scaleway dashboard
  1. Within the scaleway dashboard navigate to the "Servers" tab and click "Create Server".
    1. Give the server a name.
    2. In the "Choose an image" section select the "Distributions" tab, page through the options until you can select the latest Ubuntu, currently "Ubuntu Vivid (15.04 latest)".
    3. Finally click the "Create Server" button.

It takes a couple of minutes to build a barebones Ubuntu server for you.

Logging onto your server with SSH

  1. Once the server is set up you will be presented with a settings page. Copy the "Public IP" address.
  2. In a terminal window log into the remote server using SSH replacing the IP address in the examples below with your "Public IP" address.
    
    $ ssh root@212.47.246.30
    
    If, for any reason you changed the SSH key name from id_rsa remember to provide the path to it.
    
    $ ssh root@212.47.246.30 -i /Users/username/.ssh/scaleway_rsa.pub
    

Adding Git, Node and your GitHub project onto the server

  1. Move into the topmost directory on the server
    
    $ cd /
    
  2. Install Git onto the server
    
    $ apt-get install git
    
  3. Install Node.js - you can find different version options at github.com/nodesource/distributions
    
    curl -sL https://deb.nodesource.com/setup_4.x | sudo -E bash -
    sudo apt-get install -y nodejs
    
    
  4. Install PM2 - this will run our Node app for us
    
    $ npm install pm2 -g
    
  5. Clone your Node repo from GitHub. The space and word "app" at the end of the command tells git to place the project contents into a new directory called "app"
    
    $ git clone https://github.com/glennjones/hapi-bootstrap.git app
    
  6. Move into the new app directory
    
    $ cd app
    
  7. Install Node packages
    
    $ npm install
    
  8. Set up the basic environment variables. Note that your app needs to use process.env.HOST and process.env.PORT to set its network connection (see the sketch after this list)
    
    $ export PORT=80
    $ export HOST=0.0.0.0
    $ export NODE_ENV=production
    
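For reference, here is a minimal sketch (my own illustration; the hapi-bootstrap repo above will have its own equivalent) of a server that picks up these environment variables:


// Minimal server honouring the HOST, PORT and NODE_ENV environment variables
const http = require('http');

const host = process.env.HOST || '0.0.0.0';
const port = process.env.PORT || 3000;

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Running in ' + (process.env.NODE_ENV || 'development') + ' on ' + host + ':' + port);
}).listen(port, host);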

Running Node server

Rather than running Node directly we will use PM2. It has two major advantages over running Node directly: first, the PM2 daemon keeps your app alive forever, reloading it when required; second, PM2 will manage Node's cluster features, running a Node instance on multiple cores and bringing them together to act as one service.


$ pm2 start app.js -i 0

The -i 0 flag tells PM2 to run in cluster mode and use as many cores as it can find. The PM2 cheatsheet is useful to find other commands.

For feedback on the state of the server

  • List - $ pm2 list
  • CPU / Memory Monitoring - $ pm2 monit
screenshot PM2 list of clustered apps

View your running app in the browser

View your app from the "Public IP" in any browser


Other useful information

Updating app using Git pull

If you wish to update your app just log on to the server using SSH and use git pull
  1. SSH into server - $ ssh root@212.47.246.30
  2. Move into new app directory - $ cd /app
  3. Git pull latest code - $ git pull

Using a private GitHub repo

There are a few ways to pull a private GitHub repo from a remote server, for example using SSH deploy keys. I use a "Personal access token". You can generate these tokens in the settings area of GitHub and use git clone with this URL structure:

$ git clone https://{token}@github.com/glennjones/glennjones-net.git/ app

Saving server setup for later use

You can set up your own images based on your running server for easy deployment by using the snapshot and image features of the service. I did this to create a basic Ubuntu 15 and Node 4 image for this project.

Performance

You will often hear Scaleway C1 servers being compared to the Raspberry Pi 2 in terms of spec, which makes you think they may be slow. Using all 4 cores they seem perfectly fine for my personal projects. I have yet to find a good set of benchmarks against other services, which would be useful. This blog is running on Scaleway as of 2015-09-30.

MongoDB Issue

Scaleway C1 servers and MongoDB do not play very well together. There is a version you can get to work, but it's not well supported or considered production level.

A solution may be to move DB operations onto a Dedibox from the same company, or to use one of the MongoDB hosting services and live with some network latency.

More complex deployments

You can deploy much more complex setups with multiple Node servers, load balancers etc. I am very interested in trying out Elasticsearch. At the moment, building all this into images that you can deploy is quite complex for non-DevOps people.

I am still playing with this service, but so far it's proved interesting and I have moved at least 5 of my small personal sites over.

  • scaleway
  • node
  • deployment

The problem with window.URL

I was working on a library where I needed to resolve a relative URL using a base URL with JavaScript in the browser. After googling I found the relatively new window.URL API documented on MDN. It allows you to create a URL object that returns a resolved URL, which was great, just what I needed!


var url = new URL('/test.htm', 'http://example.com');
console.log( url.toString() );  
// output is:  http://example.com/test.htm

As with all new APIs, I don’t expect all the browsers to support them straight away, but that’s OK: I can test for the existence of APIs such as window.URL and use a fallback or polyfill. The code should look something like this.


if(window.URL){
	var url = new URL('/test.htm', 'http://example.com');
	console.log( url.toString() );
}else{
	// do something else – fallback or polyfill
}

In this case that does not work, because the URL API is actually defined across two overlapping specifications, designed in such a way that one can break the other if they are not implemented together. This is because they both lay claim to the window.URL object. The two specifications are:

  • The URL specification (WHATWG), which defines the URL() constructor and parsing behaviour
  • The File API specification (W3C), which defines URL.createObjectURL() and URL.revokeObjectURL()

The problem

The IE team implemented just the File API specification, which is not wrong in itself, but if you try to use the URL object in IE10 as specified in the URL specification, it throws an error and stops the code execution. Testing for window.URL will not help you, as the object does exist in IE10 and IE11.

I don’t think IE throwing the error is the issue. It’s more a question of how the wider web community came to design APIs that end up clashing.

The workaround

You need to write something like this:


// test URL object for parse support
function hasURLParseSupport (){
	try{
		var url = new URL('/test.htm', 'http://example.com');
		return true;
	}catch(e){
		return false;
	}
}

if(hasURLParseSupport()){
	var url = new URL('/test.htm', 'http://example.com');
	console.log( url.toString() );
}else{
	// do something else
}  

Not meeting expectations

Having done a bit of UX design in my time, I would class this issue as one of not meeting developers’ expectations. At least to me, using a try/catch block is not how I expect to have to deal with the implementation of new APIs. This should have been resolved at the API design stage, before the APIs became public in the browsers.

Wasting time

It took me quite a bit of research (2-3 hours) to work out what the issue was and come up with a small workaround I could trust, mainly because I had to start reading the specifications and running browser tests.

Going forward

  • As IE10 will be with us for many years the only practical way to use the URL object will be to use more complex tests like the hasURLParseSupport above.
  • This issue should be recognized in the two specifications that lay claim over the URL object.
  • The places that provide documentation of the APIs like MDN should discuss the issue and provide code examples of workarounds.
  • The W3Ctag should review overlapping specifications in fine detail to provide advice on implementation at this level.

I may have got the above wrong

I would just like to point out that I am just a web developer and not part of the specifications community that created the URL APIs. It’s possible that I may have missed some element of the specifications that deals with this issue. I do not know the whole history of the URL object and all the discussions of its design. Please let me know if any of the above is incorrect or can be added to, and I will change the post.

Links

  • url
  • javascript
  • api

The shoebox - a manifesto for transmat.io

Over the last couple of years I have been thinking about how we view each other through the web, the content we post and the interactions we have. Like many people, I am not happy with the path Facebook and Twitter are taking us on.

Rather than just getting upset at the ever-increasing monoculture that is slowly distorting the early promise of the web, I want to help build a new type of relationship that we can all have with the web and these sites. These are the ideas that have directed me.

Shoeboxes and mementoes

When I was a child, my brothers and I all had a shoebox each. In these we kept our mementoes. A seashell from a summer holiday where I played for hours in the rock pools, the marble from the schoolyard victory against a bully and a lot of other objects that told a story.

We all collect mementoes of some sort, they are the stories that we use to define ourselves. These stories are not static things built in stone, but a living part of us. We often reshape them through time as our view of life changes or our memories colour them. They are also used in different combinations to weave the way we present ourselves to the different groups of people in our lives.

The web’s idea of memory is not humanistic

Today large amounts of humanity’s online content are published on social networks. These sites seem to consider every bit of content and interaction to have the same level of importance. In a recent talk Maciej Ceglowski said:

The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work
Maciej Ceglowski, beyond tellerrand 2014

The incumbent social networks seem unable to forget a single piece of content we give them, as it is required to power their business model of profiling us for advertising. At the same time, we don’t have enough control within these spaces to curate our stories: to group together those things that are important to us and subdue those that are not. I would like to build publishing environments that map to the way we keep mementoes and tell stories.

My presence on the web needs a new foundation

For me to rebuild a presence on the web I need to take back control of how I create, collect, distribute, store and publish my content. Only then can I curate a more personalized representation of myself.

Transmat

I have been working with a small group to build transmat.io, it:

  1. Collects digital content for publishing, it’s not just a file store.
  2. Has a content creation and collection interface that is designed for mobile first and for social content such as status updates and check-ins.
  3. Simple distribution tools that link pre-existing services, allowing you to control your own content while still being part of the conversation on social networks.
  4. Imports pre-existing archives from Twitter, etc.
  5. Provides a simple API allowing the reuse of content i.e. in blogs.
  6. Provides auto backup in HTML for longevity.

Transmat is just a foundation on which I am going to rebuild my blog. It’s not a storytelling tool in itself; it’s more like my online memory from which I can pull and weave content.

I also hope Transmat and all its tools will help others collect and reuse their digital content so they can weave their own stories. If you’re interested, why not sign up for an invite? https://transmat.io

  • transmat
  • indieweb
  • indiewebcamp
