PYTHON-ENGINEIO(1) python-engineio PYTHON-ENGINEIO(1)

NAME

python-engineio - python-engineio Documentation

This project implements an Engine.IO server that can run standalone or integrated with a variety of Python web frameworks.

GETTING STARTED

What is Engine.IO?

Engine.IO is a lightweight transport protocol that enables real-time bidirectional event-based communication between clients (typically web browsers) and a server. The official implementations of the client and server components are written in JavaScript.

The Engine.IO protocol is extremely simple. The example that follows shows the client-side JavaScript code required to set up an Engine.IO connection to a server:

var socket = eio('http://chat.example.com');
socket.on('open', function() { alert('connected'); });
socket.on('message', function(data) { alert(data); });
socket.on('close', function() { alert('disconnected'); });
socket.send('Hello from the client!');


Features

  • Fully compatible with the JavaScript engine.io-client library, and with other Engine.IO clients.
  • Compatible with Python 2.7 and Python 3.3+.
  • Supports a large number of clients, even on modest hardware, thanks to its asynchronous design.
  • Compatible with aiohttp, sanic, tornado, eventlet, gevent, or any WSGI or ASGI compatible server.
  • Includes WSGI and ASGI middlewares that integrate Engine.IO traffic with other web applications.
  • Uses an event-based architecture implemented with decorators that hides the details of the protocol.
  • Implements HTTP long-polling and WebSocket transports.
  • Supports XHR2 and XHR browsers as clients.
  • Supports text and binary messages.
  • Supports gzip and deflate HTTP compression.
  • Configurable CORS responses to avoid cross-origin problems with browsers.

Examples

The following application is a basic example that uses the Eventlet asynchronous server and includes a small Flask application that serves the HTML/JavaScript to the client:

import engineio
import eventlet
from flask import Flask, render_template

eio = engineio.Server()
app = Flask(__name__)

@app.route('/')
def index():
    """Serve the client-side application."""
    return render_template('index.html')

@eio.on('connect')
def connect(sid, environ):
    print("connect ", sid)

@eio.on('message')
def message(sid, data):
    print("message ", data)
    eio.send(sid, 'reply')

@eio.on('disconnect')
def disconnect(sid):
    print('disconnect ', sid)

if __name__ == '__main__':
    # wrap Flask application with engineio's middleware
    app = engineio.Middleware(eio, app)
    # deploy as an eventlet WSGI server
    eventlet.wsgi.server(eventlet.listen(('', 8000)), app)


Below is a similar application, coded for asyncio (Python 3.5+ only) with the aiohttp framework:

from aiohttp import web
import engineio

eio = engineio.AsyncServer()
app = web.Application()

# attach the Engine.IO server to the application
eio.attach(app)

async def index(request):
    """Serve the client-side application."""
    with open('index.html') as f:
        return web.Response(text=f.read(), content_type='text/html')

@eio.on('connect')
def connect(sid, environ):
    print("connect ", sid)

@eio.on('message')
async def message(sid, data):
    print("message ", data)
    await eio.send(sid, 'reply')

@eio.on('disconnect')
def disconnect(sid):
    print('disconnect ', sid)

app.router.add_static('/static', 'static')
app.router.add_get('/', index)

if __name__ == '__main__':
    # run the aiohttp application
    web.run_app(app)


The client-side application must include the engine.io-client library (version 1.5.0 or newer recommended).

Each time a client connects to the server, the connect event handler is invoked with the sid (session ID) assigned to the connection and the WSGI environment dictionary. The server can inspect authentication or other headers to decide whether the client is allowed to connect. To reject a client, the handler must return False.

When the client sends a message to the server, the message event handler is invoked with the sid and the message.

Finally, when the connection is broken, the disconnect event handler is invoked, allowing the application to perform any necessary cleanup.

Because Engine.IO is a bidirectional protocol, the server can send messages to any connected client at any time. The engineio.Server.send() method takes the client's sid and the message payload, which can be of type str, bytes, list or dict (the last two are JSON encoded).
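
Because the server can send at any time, messages can also originate from a background task rather than from an event handler. The following sketch (based on the eventlet example above, with a hypothetical greeter task) relies on the start_background_task() and sleep() helpers documented in the API reference:

def greeter(sid):
    """Hypothetical task: greet a client three times, one second apart."""
    for n in range(3):
        eio.sleep(1)
        eio.send(sid, 'greeting #{}'.format(n + 1))

@eio.on('connect')
def connect(sid, environ):
    print('connect ', sid)
    # push messages to this client without waiting for it to send anything
    eio.start_background_task(greeter, sid)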

DEPLOYMENT

The following sections describe a variety of deployment strategies for Engine.IO servers.

aiohttp

aiohttp provides a framework with support for HTTP and WebSocket, based on asyncio. Support for this framework is limited to Python 3.5 and newer.

Instances of class engineio.AsyncServer will automatically use aiohttp for asynchronous operations if the library is installed. To request its use explicitly, the async_mode option can be given in the constructor:

eio = engineio.AsyncServer(async_mode='aiohttp')


A server configured for aiohttp must be attached to an existing application:

app = web.Application()
eio.attach(app)


The aiohttp application can define regular routes that will coexist with the Engine.IO server. A typical pattern is to add routes that serve a client application and any associated static files.

The aiohttp application is then executed in the usual manner:

if __name__ == '__main__':
    web.run_app(app)


Tornado

Tornado is a web framework with support for HTTP and WebSocket. Support for this framework requires Python 3.5 or newer. Only Tornado versions 5 and newer are supported, thanks to their tight integration with asyncio.

Instances of class engineio.AsyncServer will automatically use tornado for asynchronous operations if the library is installed. To request its use explicitly, the async_mode option can be given in the constructor:

eio = engineio.AsyncServer(async_mode='tornado')


A server configured for tornado must include a request handler for Engine.IO:

app = tornado.web.Application(
    [
        (r"/engine.io/", engineio.get_tornado_handler(eio)),
    ],
    # ... other application options
)


The tornado application can define other routes that will coexist with the Engine.IO server. A typical pattern is to add routes that serve a client application and any associated static files.
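
As an illustration, a hypothetical index page and a static file route could be registered next to the Engine.IO handler (a sketch only; the handler name and file paths are placeholders):

import tornado.web

class IndexHandler(tornado.web.RequestHandler):
    def get(self):
        # serve the client-side application (placeholder index.html)
        self.render('index.html')

app = tornado.web.Application(
    [
        (r"/engine.io/", engineio.get_tornado_handler(eio)),
        (r"/", IndexHandler),
        (r"/static/(.*)", tornado.web.StaticFileHandler, {"path": "static"}),
    ],
)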

The tornado application is then executed in the usual manner:

app.listen(port)
tornado.ioloop.IOLoop.current().start()


Sanic

Sanic is a very efficient asynchronous web server for Python 3.5 and newer.

Instances of class engineio.AsyncServer will automatically use Sanic for asynchronous operations if the framework is installed. To request its use explicitly, the async_mode option can be given in the constructor:

eio = engineio.AsyncServer(async_mode='sanic')


A server configured for Sanic must be attached to an existing application:

app = Sanic()
eio.attach(app)


The Sanic application can define regular routes that will coexist with the Engine.IO server. A typical pattern is to add routes that serve a client application and any associated static files to this application.
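
For instance, a route serving a placeholder index.html and a static directory could be added to the application created above (a sketch, assuming those files exist):

from sanic import response

@app.route('/')
async def index(request):
    # serve the client-side application (placeholder index.html)
    return await response.file('index.html')

app.static('/static', './static')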

The Sanic application is then executed in the usual manner:

if __name__ == '__main__':
    app.run()


Uvicorn, Daphne, and other ASGI servers

The engineio.ASGIApp class is an ASGI compatible application that can forward Engine.IO traffic to an engineio.AsyncServer instance:

eio = engineio.AsyncServer(async_mode='asgi')
app = engineio.ASGIApp(eio)


The application can then be deployed with any ASGI compatible web server.
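
As an illustration, the application could be run with uvicorn (a sketch; any other ASGI compliant server works the same way, and app refers to the ASGIApp instance created above):

import uvicorn

if __name__ == '__main__':
    uvicorn.run(app, host='127.0.0.1', port=5000)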

Eventlet

Eventlet is a high performance concurrent networking library for Python 2 and 3 that uses coroutines, enabling code to be written in the same style used with the blocking standard library functions. An Engine.IO server deployed with eventlet has access to the long-polling and WebSocket transports.

Instances of class engineio.Server will automatically use eventlet for asynchronous operations if the library is installed. To request its use explicitly, the async_mode option can be given in the constructor:

eio = engineio.Server(async_mode='eventlet')


A server configured for eventlet is deployed as a regular WSGI application, using the provided engineio.Middleware:

import eventlet

app = engineio.Middleware(eio)
eventlet.wsgi.server(eventlet.listen(('', 8000)), app)


Using Gunicorn with Eventlet

An alternative to running the eventlet WSGI server as above is to use gunicorn, a fully featured pure Python web server. The command to launch the application under gunicorn is shown below:

$ gunicorn -k eventlet -w 1 module:app


Due to limitations in its load balancing algorithm, gunicorn can only be used with one worker process, so the -w 1 option is required. Note that a single eventlet worker can handle a large number of concurrent clients.

Another limitation when using gunicorn is that the WebSocket transport is not available, because this transport requires extensions to the WSGI standard.

Note: Eventlet provides a monkey_patch() function that replaces all the blocking functions in the standard library with equivalent asynchronous versions. While python-engineio does not require monkey patching, other libraries such as database drivers are likely to require it.
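
A minimal sketch of how an application would typically apply the patch, assuming it is done before any other modules that rely on the standard library are imported:

import eventlet
eventlet.monkey_patch()

import engineio
eio = engineio.Server(async_mode='eventlet')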

Gevent

Gevent is another asynchronous framework based on coroutines, very similar to eventlet. An Engine.IO server deployed with gevent has access to the long-polling transport. If project gevent-websocket is installed, the WebSocket transport is also available. Note that when using the uWSGI server, the native WebSocket implementation of uWSGI can be used instead of gevent-websocket (see next section for details on this).

Instances of class engineio.Server will automatically use gevent for asynchronous operations if the library is installed and eventlet is not installed. To request gevent to be selected explicitly, the async_mode option can be given in the constructor:

# gevent alone or with gevent-websocket
eio = engineio.Server(async_mode='gevent')


A server configured for gevent is deployed as a regular WSGI application, using the provided engineio.Middleware:

from gevent import pywsgi
app = engineio.Middleware(eio)
pywsgi.WSGIServer(('', 8000), app).serve_forever()


If the WebSocket transport is installed, then the server must be started as follows:

from gevent import pywsgi
from geventwebsocket.handler import WebSocketHandler
app = engineio.Middleware(eio)
pywsgi.WSGIServer(('', 8000), app,
                  handler_class=WebSocketHandler).serve_forever()


Using Gunicorn with Gevent

An alternative to running the gevent WSGI server as above is to use gunicorn, a fully featured pure Python web server. The command to launch the application under gunicorn is shown below:

$ gunicorn -k gevent -w 1 module:app


Or to include WebSocket:

$ gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -w 1 module:app


As with eventlet, gunicorn can only be used with one worker process due to limitations in its load balancing algorithm, so the -w 1 option is required. Note that a single gevent worker can handle a large number of concurrent clients.

Note: Gevent provides a monkey_patch() function that replaces all the blocking functions in the standard library with equivalent asynchronous versions. While python-engineio does not require monkey patching, other libraries such as database drivers are likely to require it.
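
The gevent equivalent is sketched below (again, the patch should be applied before other modules are imported):

from gevent import monkey
monkey.patch_all()

import engineio
eio = engineio.Server(async_mode='gevent')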

uWSGI

When using the uWSGI server in combination with gevent, the Engine.IO server can take advantage of uWSGI's native WebSocket support.

Instances of class engineio.Server will automatically use this option for asynchronous operations if both gevent and uWSGI are installed and eventlet is not installed. To request this asynchronous mode explicitly, the async_mode option can be given in the constructor:

# gevent with uWSGI
eio = engineio.Server(async_mode='gevent_uwsgi')


A complete explanation of the configuration and usage of the uWSGI server is beyond the scope of this documentation. The uWSGI server is a fairly complex package that provides a large and comprehensive set of options. It must be compiled with WebSocket and SSL support for the WebSocket transport to be available. By way of introduction, the following command starts a uWSGI server for the latency.py example on port 5000:

$ uwsgi --http :5000 --gevent 1000 --http-websockets --master --wsgi-file latency.py --callable app


Standard Threads

While not comparable to eventlet and gevent in terms of performance, the Engine.IO server can also be configured to work with multi-threaded web servers that use standard Python threads. This is an ideal setup to use with development servers such as Werkzeug. Only the long-polling transport is currently available when using standard threads.

Instances of class engineio.Server will automatically use the threading mode if neither eventlet nor gevent is installed. To request the threading mode explicitly, the async_mode option can be given in the constructor:

eio = engineio.Server(async_mode='threading')


A server configured for threading is deployed as a regular web application, using any WSGI compliant multi-threaded server. The example below deploys an Engine.IO application combined with a Flask web application, using Flask's development web server, which is based on Werkzeug:

import engineio
from flask import Flask

eio = engineio.Server(async_mode='threading')
app = Flask(__name__)
app.wsgi_app = engineio.Middleware(eio, app.wsgi_app)

# ... Engine.IO and Flask handler functions ...

if __name__ == '__main__':
    app.run(threaded=True)


When using the threading mode, it is important to ensure that the WSGI server can handle multiple concurrent requests using threads, since a client can have up to two outstanding requests at any given time. The Werkzeug server is single-threaded by default, so the threaded=True option is required.

Note that servers that use worker processes instead of threads, such as gunicorn, do not support an Engine.IO server configured in threading mode.

Scalability Notes

Engine.IO is a stateful protocol, which makes horizontal scaling more difficult. To deploy a cluster of Engine.IO processes hosted on one or multiple servers the following conditions must be met:
  • Each Engine.IO server process must be able to handle multiple requests concurrently. This is required because long-polling clients send two requests in parallel. Worker processes that can only handle one request at a time are not supported.
  • The load balancer must be configured to always forward requests from a client to the same process. Load balancers call this sticky sessions, or session affinity.

API REFERENCE

Server class

class engineio.Server(async_mode=None, ping_timeout=60, ping_interval=25, max_http_buffer_size=100000000, allow_upgrades=True, http_compression=True, compression_threshold=1024, cookie='io', cors_allowed_origins=None, cors_credentials=True, logger=False, json=None, async_handlers=True, monitor_clients=None, **kwargs)
An Engine.IO server.

This class implements a fully compliant Engine.IO web server with support for websocket and long-polling transports.

Parameters
  • async_mode -- The asynchronous model to use. See the Deployment section in the documentation for a description of the available options. Valid async modes are "threading", "eventlet", "gevent" and "gevent_uwsgi". If this argument is not given, "eventlet" is tried first, then "gevent_uwsgi", then "gevent", and finally "threading". The first async mode that has all its dependencies installed is the one that is chosen.
  • ping_timeout -- The time in seconds that the client waits for the server to respond before disconnecting. The default is 60 seconds.
  • ping_interval -- The interval in seconds at which the client pings the server. The default is 25 seconds.
  • max_http_buffer_size -- The maximum size of a message when using the polling transport. The default is 100,000,000 bytes.
  • allow_upgrades -- Whether to allow transport upgrades or not. The default is True.
  • http_compression -- Whether to compress packets when using the polling transport. The default is True.
  • compression_threshold -- Only compress messages when their byte size is greater than this value. The default is 1024 bytes.
  • cookie -- Name of the HTTP cookie that contains the client session id. If set to None, a cookie is not sent to the client. The default is 'io'.
  • cors_allowed_origins -- Origin or list of origins that are allowed to connect to this server. All origins are allowed by default, which is equivalent to setting this argument to '*'.
  • cors_credentials -- Whether credentials (cookies, authentication) are allowed in requests to this server. The default is True.
  • logger -- To enable logging set to True or pass a logger object to use. To disable logging set to False. The default is False.
  • json -- An alternative json module to use for encoding and decoding packets. Custom json modules must have dumps and loads functions that are compatible with the standard library versions.
  • async_handlers -- If set to True, run message event handlers in non-blocking threads. To run handlers synchronously, set to False. The default is True.
  • monitor_clients -- If set to True, a background task will ensure inactive clients are closed. Set to False to disable the monitoring task (not recommended). The default is True.
  • kwargs -- Reserved for future extensions, any additional parameters given as keyword arguments will be silently ignored.
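
As an illustration of these options (the values below are arbitrary examples, not recommendations), a server with verbose logging, a shorter ping interval and CORS restricted to a single origin could be created as follows:

import engineio

eio = engineio.Server(logger=True,
                      ping_interval=10,
                      cors_allowed_origins=['https://example.com'])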


disconnect(sid=None)
Disconnect a client.
Parameters
sid -- The session id of the client to close. If this parameter is not given, then all clients are closed.


handle_request(environ, start_response)
Handle an HTTP request from the client.

This is the entry point of the Engine.IO application, using the same interface as a WSGI application. For the typical usage, this function is invoked by the Middleware instance, but it can be invoked directly when the middleware is not used.

Parameters
  • environ -- The WSGI environment.
  • start_response -- The WSGI start_response function.


This function returns the HTTP response body to deliver to the client as a byte sequence.


on(event, handler=None)
Register an event handler.
Parameters
  • event -- The event name. Can be 'connect', 'message' or 'disconnect'.
  • handler -- The function that should be invoked to handle the event. When this parameter is not given, the method acts as a decorator for the handler function.


Example usage:

# as a decorator:
@eio.on('connect')
def connect_handler(sid, environ):
    print('Connection request')
    if environ['REMOTE_ADDR'] in blacklisted:
        return False  # reject

# as a method:
def message_handler(sid, msg):
    print('Received message: ', msg)
    eio.send(sid, 'response')
eio.on('message', message_handler)


The handler function receives the sid (session ID) for the client as first argument. The 'connect' event handler receives the WSGI environment as a second argument, and can return False to reject the connection. The 'message' handler receives the message payload as a second argument. The 'disconnect' handler does not take a second argument.


send(sid, data, binary=None)
Send a message to a client.
Parameters
  • sid -- The session id of the recipient client.
  • data -- The data to send to the client. Data can be of type str, bytes, list or dict. If a list or dict, the data will be serialized as JSON.
  • binary -- True to send packet as binary, False to send as text. If not given, unicode (Python 2) and str (Python 3) are sent as text, and str (Python 2) and bytes (Python 3) are sent as binary.



sleep(seconds=0)
Sleep for the requested amount of time using the appropriate async model.

This is a utility function that applications can use to put a task to sleep without having to worry about using the correct call for the selected async mode.


start_background_task(target, *args, **kwargs)
Start a background task using the appropriate async model.

This is a utility function that applications can use to start a background task using the method that is compatible with the selected async mode.

Parameters
  • target -- the target function to execute.
  • args -- arguments to pass to the function.
  • kwargs -- keyword arguments to pass to the function.


This function returns an object compatible with the Thread class in the Python standard library. The start() method on this object is already called by this function.
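
For example, a hypothetical one-off task can be started and later waited on, since the returned object behaves like a standard Thread (a sketch):

def warmup():
    """Hypothetical task that runs once in the background."""
    eio.sleep(5)
    print('warmup complete')

task = eio.start_background_task(warmup)
# start() has already been called; join() waits for the task to finish
task.join()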


transport(sid)
Return the name of the transport used by the client.

The two possible values returned by this function are 'polling' and 'websocket'.

Parameters
sid -- The session id of the client.



AsyncServer class

class engineio.AsyncServer(async_mode=None, ping_timeout=60, ping_interval=25, max_http_buffer_size=100000000, allow_upgrades=True, http_compression=True, compression_threshold=1024, cookie='io', cors_allowed_origins=None, cors_credentials=True, logger=False, json=None, async_handlers=True, monitor_clients=None, **kwargs)
An Engine.IO server for asyncio.

This class implements a fully compliant Engine.IO web server with support for websocket and long-polling transports, compatible with the asyncio framework on Python 3.5 or newer.

Parameters
  • async_mode -- The asynchronous model to use. See the Deployment section in the documentation for a description of the available options. Valid async modes are "aiohttp", "sanic", "tornado" and "asgi". If this argument is not given, an async mode is chosen based on the installed packages.
  • ping_timeout -- The time in seconds that the client waits for the server to respond before disconnecting.
  • ping_interval -- The interval in seconds at which the client pings the server.
  • max_http_buffer_size -- The maximum size of a message when using the polling transport.
  • allow_upgrades -- Whether to allow transport upgrades or not.
  • http_compression -- Whether to compress packets when using the polling transport.
  • compression_threshold -- Only compress messages when their byte size is greater than this value.
  • cookie -- Name of the HTTP cookie that contains the client session id. If set to None, a cookie is not sent to the client.
  • cors_allowed_origins -- List of origins that are allowed to connect to this server. All origins are allowed by default.
  • cors_credentials -- Whether credentials (cookies, authentication) are allowed in requests to this server.
  • logger -- To enable logging set to True or pass a logger object to use. To disable logging set to False.
  • json -- An alternative json module to use for encoding and decoding packets. Custom json modules must have dumps and loads functions that are compatible with the standard library versions.
  • async_handlers -- If set to True, run message event handlers in non-blocking threads. To run handlers synchronously, set to False. The default is True.
  • kwargs -- Reserved for future extensions, any additional parameters given as keyword arguments will be silently ignored.


attach(app, engineio_path='engine.io')
Attach the Engine.IO server to an application.

disconnect(sid=None)
Disconnect a client.
Parameters
sid -- The session id of the client to close. If this parameter is not given, then all clients are closed.

Note: this method is a coroutine.


handle_request(*args, **kwargs)
Handle an HTTP request from the client.

This is the entry point of the Engine.IO application. This function returns the HTTP response to deliver to the client.

Note: this method is a coroutine.


on(event, handler=None)
Register an event handler.
Parameters
  • event -- The event name. Can be 'connect', 'message' or 'disconnect'.
  • handler -- The function that should be invoked to handle the event. When this parameter is not given, the method acts as a decorator for the handler function.


Example usage:

# as a decorator:
@eio.on('connect')
def connect_handler(sid, environ):
    print('Connection request')
    if environ['REMOTE_ADDR'] in blacklisted:
        return False  # reject

# as a method:
def message_handler(sid, msg):
    print('Received message: ', msg)
    eio.send(sid, 'response')
eio.on('message', message_handler)


The handler function receives the sid (session ID) for the client as first argument. The 'connect' event handler receives the WSGI environment as a second argument, and can return False to reject the connection. The 'message' handler receives the message payload as a second argument. The 'disconnect' handler does not take a second argument.


send(sid, data, binary=None)
Send a message to a client.
Parameters
  • sid -- The session id of the recipient client.
  • data -- The data to send to the client. Data can be of type str, bytes, list or dict. If a list or dict, the data will be serialized as JSON.
  • binary -- True to send packet as binary, False to send as text. If not given, unicode (Python 2) and str (Python 3) are sent as text, and str (Python 2) and bytes (Python 3) are sent as binary.


Note: this method is a coroutine.


sleep(seconds=0)
Sleep for the requested amount of time using the appropriate async model.

This is a utility function that applications can use to put a task to sleep without having to worry about using the correct call for the selected async mode.

Note: this method is a coroutine.


start_background_task(target, *args, **kwargs)
Start a background task using the appropriate async model.

This is a utility function that applications can use to start a background task using the method that is compatible with the selected async mode.

Parameters
  • target -- the target function to execute.
  • args -- arguments to pass to the function.
  • kwargs -- keyword arguments to pass to the function.


The return value is an asyncio.Task object.


transport(sid)
Return the name of the transport used by the client.

The two possible values returned by this function are 'polling' and 'websocket'.

Parameters
sid -- The session id of the client.



WSGIApp class

class engineio.WSGIApp(engineio_app, wsgi_app=None, static_files=None, engineio_path='engine.io')
WSGI application middleware for Engine.IO.

This middleware dispatches traffic to an Engine.IO application, and optionally forwards regular HTTP traffic to a WSGI application or serves a list of predefined static files to clients.

Parameters
  • engineio_app -- The Engine.IO server.
  • wsgi_app -- The WSGI app that receives all other traffic.
  • static_files -- A dictionary where the keys are URLs that should be served as static files. For each URL, the value is a dictionary with content_type and filename keys. This option is intended to be used for serving client files during development.
  • engineio_path -- The endpoint where the Engine.IO application should be installed. The default value is appropriate for most cases.


Example usage:

import engineio
import eventlet
eio = engineio.Server()
app = engineio.WSGIApp(eio, static_files={
    '/': {'content_type': 'text/html', 'filename': 'index.html'},
    '/index.html': {'content_type': 'text/html',
                    'filename': 'index.html'},
})
eventlet.wsgi.server(eventlet.listen(('', 8000)), app)



ASGIApp class

class engineio.ASGIApp(engineio_server, other_asgi_app=None, static_files=None, engineio_path='engine.io')
ASGI application middleware for Engine.IO.

This middleware dispatches traffic to an Engine.IO application, and optionally serves a list of static files to clients or forwards regular HTTP traffic to another ASGI application.

Parameters
  • engineio_server -- The Engine.IO server.
  • static_files -- A dictionary where the keys are URLs that should be served as static files. For each URL, the value is a dictionary with content_type and filename keys. This option is intended to be used for serving client files during development.
  • other_asgi_app -- A separate ASGI app that receives all other traffic.
  • engineio_path -- The endpoint where the Engine.IO application should be installed. The default value is appropriate for most cases.


Example usage:

import engineio
import uvicorn

eio = engineio.AsyncServer(async_mode='asgi')
app = engineio.ASGIApp(eio, static_files={
    '/': {'content_type': 'text/html', 'filename': 'index.html'},
    '/index.html': {'content_type': 'text/html',
                    'filename': 'index.html'},
})
uvicorn.run(app, host='127.0.0.1', port=5000)



Middleware class (deprecated)

class engineio.Middleware(engineio_app, wsgi_app=None, static_files=None, engineio_path='engine.io')
This class has been renamed to WSGIApp and is now deprecated.


AUTHOR

Miguel Grinberg

COPYRIGHT

2018, Miguel Grinberg
November 26, 2018