High Performance Computing (HPC)

DTS provides a set of HPC services for LSE researchers called Fabian.

Fabian's overall aim is to provide a high-quality central service for researchers to carry out compute-intensive tasks and to process large datasets with ease.

Fabian provides a High Performance Computing (HPC) Cluster with the following key features:

  • A mixture of compute nodes providing high core counts and large memory on which users can run applications.
  • High capacity local disk storage on which to work with large datasets and to process results.
  • A suite of applications optimised for the high performance computing hardware.
  • High-throughput job scheduler configured to manage users' workloads.
  • High speed connections into the LSE networks, LSE research systems and other academic networks.
  • A range of access methods, from command-line interfaces through to the web, with graphical and desktop access.

The service is available for all users to:

  • Use a variety of applications.
  • Build their own bespoke applications and custom application modules.

For more information about the services, getting access or training materials, please see the following:

Getting started and training 

Getting started guidance

We have produced a new set of guides for users to 'Get Started' using Fabian in a Moodle course, open to anyone at LSE to enrol on.

Fabian Moodle Course at https://moodle.lse.ac.uk/course/view.php?name=fabian

There is still material we have yet to write, including worked examples for using particular applications within Fabian. If you have a particular need, please contact fabian@lse.ac.uk or use the Suggestions Forums within the course.

Whilst the original course slides provided by the third party Alces are still available to LSE researchers, we intend to remove them once the new guides cover all the information required, and we will provide PDF versions of the guides from the Moodle site.

Fabian workshops

We organise and run workshops, usually for groups of 12-30 users. Workshops are scheduled once sufficient interest has been received and can be requested by anyone.

The workshop lasts just under two hours and takes the form of a presentation, followed by a discussion of your needs and then hands-on 'Getting Started' help. Users are encouraged to bring a laptop, and should leave the workshop with a clear understanding of the system and the ability to log in and get started.

To request a workshop, contact fabian@lse.ac.uk.

Request an account

To set up your Fabian user account, please email the HPC Support Team (fabian@lse.ac.uk) with the following information:

  • Name, LSE network username, department/school and your departmental role.
  • Your area of research and a brief description of what you intend to use Fabian for.
  • An outline of your HPC requirements, such as the applications you intend to run and the amount of data you will be working with.

Upcoming services

We have a number of services that will pilot in the next few months and become available to all users once the terms of the service are agreed and testing is complete.

Database Service (DBaaS)

We will deliver a database service, initially solely for use in the HPC environment. The first database product will be Oracle 12c, as the School has a campus licence that we can use at no cost.

If you are interested in becoming an early adopter of this service and shaping the form it will take please contact: fabian@lse.ac.uk.

Git Version Control

To allow researchers to share and version-control their code, a centrally managed GitLab service will be installed.

GitLab has been used successfully across the School in both research and administrative contexts, and will benefit researchers by providing a history of changes as well as easier deployment and sharing of code.

Hardware configuration 

The Fabian Cluster service provides access to:

  • 15 nodes totalling 360 CPU cores

  • 2TByte memory across the cluster (128GByte per node)

  • Soon there will be nodes providing 1TByte memory 

provided by the following hardware arrangement:


Login Node

Provides access to the compute nodes, storage and scheduler through various command-line and graphical interfaces.

HPE ProLiant DL80 2U server

  • 8 CPU Cores
  • 14GByte memory 

15 Compute Nodes

 Running user applications

HPE Apollo 6000

  • Intel v3 Haswell processors
  • 24 CPU cores
  • 128GByte memory
  • 1TByte local storage


2 Large Compute Nodes

 Running user applications

HPE Apollo 6000

  • Intel v4 Broadwell processors
  • 28 CPU cores
  • 1TByte memory
  • 1TByte local storage


Storage Node

Holds your data while you are working on Fabian, whether in file, block or object form.

HPE ProLiant DL80 2U server

  • 96TByte shared storage

2 Database Servers

Provide a relational database service

HPE BL460c Blade Servers

  • Intel v4 Broadwell processors
  • 128GByte memory
  • Scalable SAN storage


Software supported

Fabian uses a modules environment to allow users to choose between, and run, multiple versions of software and the libraries they depend on.
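A typical session with a modules environment looks like the sketch below; the module names and version numbers are purely illustrative, and `module avail` on Fabian itself gives the definitive list:

```shell
# See every module (application/library version) available on the cluster
module avail

# Load a specific version of an application into your session
module load R/3.2.5        # illustrative name -- check 'module avail' first

# Show what is currently loaded, and unload everything when done
module list
module purge
```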

The Fabian environment aims to offer a range of standard software as used at the LSE. The current list includes the following:

Compilers, Languages and Libraries

  • R (2 versions) with MPI and other libraries
  • Python (3 standard versions and 2 anaconda distributions)
  • C++ (2 versions)

Statistical Applications

  • MATLAB R2015aSP1
  • Stata 14MP1, 14MP2 and 14MP24

General Utilities

  • StatTransfer v13

If you require other software, libraries or modules to be installed, please request them via email to fabian@lse.ac.uk and we will install them on the cluster for all users to access.

Alternatively, you can install software or modules yourself into your home directory. If you wish to run multiple versions of software yourself, we will provide examples and assistance in doing so.
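As one hedged sketch of a self-managed install, a Python virtual environment keeps your own packages in your home directory, separate from the cluster-wide modules (the directory name here is just an illustration):

```shell
# Create an isolated Python environment under your home directory
# (--without-pip keeps the example self-contained; omit it to include pip)
python3 -m venv --without-pip "$HOME/envs/myproject"

# Activate it: 'python' now resolves to the copy inside ~/envs/myproject
. "$HOME/envs/myproject/bin/activate"
command -v python      # shows the environment's own interpreter
deactivate             # return to your normal session environment
```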

Investment and governance 

The School is making a significant investment in the Fabian environment, and it is expected the service will grow as usage patterns and the requirements of the research community are better understood.

Fabian is the result of the HPC project, sponsored by Professor Julia Black in November 2014, which set out to provide a high-quality central service for researchers to carry out compute-intensive tasks and process large datasets with ease.

The HPC Fabian environment is overseen by a Steering Group which provides direction for the ongoing development and growth of the system.

The HPC Steering Group

The governing body for the Fabian service at LSE is the HPC Steering Group, which is made up of academics and professional services colleagues:

Kenneth Benoit

Head of the Department of Methodology

Christian Julliard

Associate Professor of Finance

David Coombe

Director of the Research Division

Jon Danielsson

Reader in Finance; Director, Systemic Risk Centre

Jose-Luis Fernandez

Deputy Director of the Personal Social Services Research Unit

Guy Michaels

Associate Professor of Economics

Nic Warner

IT Manager for the LSE Research Laboratory

John Harris

Head of Applications, IMT


High-Performance Computing Service Manager, IMT


High-Performance Computing Support Analyst, IMT

The terms of reference for the Steering Group are being drafted and will be available here once approved by the group.

If you have any questions about the Steering Group, please email fabian@lse.ac.uk.

Cluster service principles 

The Fabian Cluster service has been designed to accommodate a range of user requirements.

A variety of different use-cases

  • Embarrassingly parallel / single-threaded jobs
  • SMP / multi-threaded, single-node jobs
  • MPI / parallel multi-node jobs

This is aimed at supporting the current activities of researchers on campus, and at allowing them to make increasing use of HPC as their research needs grow and their experience develops.
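The "embarrassingly parallel" case is simply many independent runs of the same program. As a scheduler-agnostic sketch (job submission on Fabian itself goes through the cluster's scheduler, not plain shell job control), the idea in miniature:

```shell
# Four independent tasks run concurrently; none needs to talk to the
# others -- the embarrassingly parallel pattern in miniature.
outdir=$(mktemp -d)
for i in 1 2 3 4; do
    ( echo "result for task $i" > "$outdir/task_$i.out" ) &
done
wait                 # block until every background task has finished
ls "$outdir"         # one result file per task
```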

Scalable architecture

  • Separate login node providing connections from the LSE networks.
  • Multiple compute nodes for different jobs (e.g. standard or high memory)
  • Dedicated storage nodes and connections to LSE storage.
  • An internal high speed network connecting the compute and storage nodes. 

The separation of tasks described above allows each component to be tuned for its specific task, delivering better performance to the user. It also permits the system to grow to reflect the overall capacity and capability requirements of users. If researchers have a specific requirement, they can add their own hardware to the system, taking advantage of the existing software and hardware resources within an integrated service.

Multiple storage tiers for user data

  • 500GB local scratch disk on every node
  • 30TB user home-directory space
  • Larger project based directory spaces

This allows users to store datasets in an optimal location while they are working with the data on Fabian.
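The usual working pattern is to stage data onto the fast local scratch disk, compute there, then copy only the results back to shared storage. In this sketch `mktemp` stands in for a node's scratch area (on a real node it would be a path such as a local `/scratch` directory), so the example runs anywhere:

```shell
# Stage in, compute on scratch, stage results out, tidy up.
scratch=$(mktemp -d)

echo "raw input data" > "$HOME/dataset.csv"    # stand-in for a home-directory dataset
cp "$HOME/dataset.csv" "$scratch/"             # stage in to fast local disk
tr 'a-z' 'A-Z' < "$scratch/dataset.csv" > "$scratch/results.txt"   # compute on scratch
cp "$scratch/results.txt" "$HOME/"             # stage only the results back out
rm -rf "$scratch"                              # free the scratch space for other users
```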

Multiple User environments

  • A command line interface
  • A graphical/desktop interface

This is aimed at easing the transition between the user's own working environment and the Fabian service, allowing them to focus on the research challenges of analysing larger datasets or simulating the more complex problems that HPC enables.