Simple TLS Server
The code below is a complete implementation of a minimal TLS server. The first thing we do is initialise OpenSSL in the init_openssl() function by loading the strings used for error messages and setting up the algorithms needed for TLS. We then create an SSL_CTX, or SSL context. This is created using SSLv23_server_method(), which despite its name creates a server that will negotiate the highest version of SSL/TLS supported by the client that connects to it. The context is then configured: we use SSL_CTX_set_ecdh_auto() to tell OpenSSL to select the right elliptic curves for us (this function isn't available in older versions of OpenSSL, which required the curves to be chosen manually). The final step of configuring the context is to specify the certificate and private key to use.
Next we perform some normal socket programming and create a new server socket; there's nothing OpenSSL-specific about this code. Whenever we get a new connection we call accept() as usual. To handle the TLS we create a new SSL structure, which holds the information related to this particular connection. We use SSL_set_fd() to tell OpenSSL the file descriptor to use for the communication. In this example, we call SSL_accept() to handle the server side of the TLS handshake, then use SSL_write() to send our message. Finally we clean up the various structures.
```c
#include <stdio.h>
#include <stdlib.h>     /* exit, EXIT_FAILURE */
#include <string.h>     /* strlen */
#include <unistd.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

/* Create, bind and listen on an ordinary TCP server socket. */
int create_socket(int port)
{
    int s;
    struct sockaddr_in addr;

    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) {
        perror("Unable to create socket");
        exit(EXIT_FAILURE);
    }

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("Unable to bind");
        exit(EXIT_FAILURE);
    }

    if (listen(s, 1) < 0) {
        perror("Unable to listen");
        exit(EXIT_FAILURE);
    }

    return s;
}

void init_openssl()
{
    SSL_load_error_strings();
    OpenSSL_add_ssl_algorithms();
}

void cleanup_openssl()
{
    EVP_cleanup();
}

SSL_CTX *create_context()
{
    const SSL_METHOD *method;
    SSL_CTX *ctx;

    method = SSLv23_server_method();

    ctx = SSL_CTX_new(method);
    if (!ctx) {
        perror("Unable to create SSL context");
        ERR_print_errors_fp(stderr);
        exit(EXIT_FAILURE);
    }

    return ctx;
}

void configure_context(SSL_CTX *ctx)
{
    SSL_CTX_set_ecdh_auto(ctx, 1);

    /* Set the key and cert */
    if (SSL_CTX_use_certificate_file(ctx, "cert.pem", SSL_FILETYPE_PEM) <= 0) {
        ERR_print_errors_fp(stderr);
        exit(EXIT_FAILURE);
    }

    if (SSL_CTX_use_PrivateKey_file(ctx, "key.pem", SSL_FILETYPE_PEM) <= 0) {
        ERR_print_errors_fp(stderr);
        exit(EXIT_FAILURE);
    }
}

int main(int argc, char **argv)
{
    int sock;
    SSL_CTX *ctx;

    init_openssl();
    ctx = create_context();
    configure_context(ctx);

    sock = create_socket(4433);

    /* Handle connections */
    while (1) {
        struct sockaddr_in addr;
        socklen_t len = sizeof(addr);
        SSL *ssl;
        const char reply[] = "test\n";

        int client = accept(sock, (struct sockaddr *)&addr, &len);
        if (client < 0) {
            perror("Unable to accept");
            exit(EXIT_FAILURE);
        }

        ssl = SSL_new(ctx);
        SSL_set_fd(ssl, client);

        if (SSL_accept(ssl) <= 0) {
            ERR_print_errors_fp(stderr);
        } else {
            SSL_write(ssl, reply, strlen(reply));
        }

        SSL_free(ssl);
        close(client);
    }

    close(sock);
    SSL_CTX_free(ctx);
    cleanup_openssl();
}
```
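To try the server out, a self-signed certificate and key can be generated with the openssl command line, for example `openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out cert.pem`. Connecting with `openssl s_client -connect localhost:4433` should then print the "test" reply once the handshake completes.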
Session Reuse
According to Viktor Dukhovni, in the mailing list thread "Possible to control session reuse from the client":
> For performance testing purposes, I would like to turn off session
> reuse in the (homegrown) client I use for testing. Is there a function
> in the openssl library to do it?
>
> I tried googling for "openssl client don't send session id" but I didn't
> find anything useful.

Just do nothing. Client sessions are not reused unless you explicitly arrange for reuse of a session by calling SSL_set_session() before SSL_connect().

If you're trying to avoid wasting memory on storing client-side sessions that you'll never reuse, then this may help:

    SSL_CTX_set_session_cache_mode(client_ctx, SSL_SESS_CACHE_OFF);

but note this is also the default state, so it is not needed unless some other code has explicitly enabled client-side caching of sessions. Only the server-side cache is enabled by default.
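To make the reuse path concrete, here is a minimal sketch of explicit client-side session reuse. It assumes an already-configured client SSL_CTX and a hypothetical connect_to_server() helper that returns a connected TCP socket; error handling is trimmed for brevity.

```c
#include <stdio.h>
#include <unistd.h>
#include <openssl/ssl.h>

/* Hypothetical helper (not shown): returns a TCP socket connected to
 * the server under test. */
extern int connect_to_server(void);

/* Connect twice with the same SSL_CTX, explicitly arranging for the
 * second handshake to reuse the first connection's session. */
void demo_session_reuse(SSL_CTX *ctx)
{
    SSL_SESSION *saved = NULL;

    /* First connection: full handshake, then save the session. */
    int fd = connect_to_server();
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    if (SSL_connect(ssl) == 1)
        saved = SSL_get1_session(ssl); /* takes a reference we must free */
    SSL_shutdown(ssl);
    SSL_free(ssl);
    close(fd);

    /* Second connection: offer the saved session *before* SSL_connect();
     * without SSL_set_session() the client performs a full handshake. */
    fd = connect_to_server();
    ssl = SSL_new(ctx);
    SSL_set_fd(ssl, fd);
    if (saved != NULL)
        SSL_set_session(ssl, saved);
    if (SSL_connect(ssl) == 1 && SSL_session_reused(ssl))
        printf("session was reused\n");

    SSL_SESSION_free(saved);
    SSL_shutdown(ssl);
    SSL_free(ssl);
    close(fd);
}
```

Note that under TLS 1.3 session tickets arrive after the handshake, so a robust client would capture the session from a new-session callback registered with SSL_CTX_sess_set_new_cb() rather than immediately after SSL_connect().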
0-RTT
0-RTT (early data) is specified in RFC 8446, the TLS 1.3 specification. 0-RTT allows an application to resume a previous session immediately, at the expense of consuming data that is not protected against replay. You should avoid 0-RTT if possible. In fact, an organization's data security policy may not allow it for some higher data sensitivity levels.
Care should be taken when enabling 0-RTT on the server because a number of protections must be in place, some of them higher up the stack, outside of the secure socket layer. Below is a list of potential problems from "Closing on 0-RTT" on the IETF TLS working group mailing list; a sketch of the server-side API follows the list.
- 0-RTT without stateful anti-replay allows a very high number of replays, breaking rate-limiting systems, even high-performance ones, and opening the door to DDoS attacks.
- 0-RTT without stateful anti-replay allows a very high number of replays, making it possible to exploit timing side channels for information leakage. Very few if any applications are engineered to mitigate or eliminate such side channels.
- 0-RTT without global anti-replay allows leaking information from the 0-RTT data via cache timing attacks. HTTP GET URLs sent to CDNs are especially vulnerable.
- 0-RTT without global anti-replay allows non-idempotent actions contained in 0-RTT data to be repeated, potentially many times. Abuse of HTTP GET for non-idempotent actions is fairly common.
- 0-RTT allows requests to be easily reordered via retransmission from the client. This can lead to various unexpected application behaviors if the possibility of such reordering is not taken into account. "Eventually consistent" datastores are especially vulnerable.
- 0-RTT exporters are not safe for authentication unless the server does global anti-replay on 0-RTT.
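If 0-RTT is still required after weighing those risks, OpenSSL 1.1.1 and later expose it on the server through SSL_CTX_set_max_early_data() and SSL_read_early_data(). The sketch below shows the shape of the server-side read loop; it assumes a TLS 1.3-capable SSL_CTX named ctx and an already-accepted socket named client (both hypothetical), and it leaves the anti-replay measures discussed above to the application.

```c
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

/* Sketch: accept up to 16 KiB of 0-RTT data on one connection.
 * "ctx" must be TLS 1.3-capable; "client" is an accepted TCP socket.
 * Early data is replayable: treat it as untrusted, idempotent input. */
void serve_with_early_data(SSL_CTX *ctx, int client)
{
    char buf[4096];
    size_t readbytes = 0;
    int ret;

    /* Advertise willingness to accept early data (0 disables it). */
    SSL_CTX_set_max_early_data(ctx, 16384);

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, client);

    /* Read 0-RTT data before the handshake has completed. */
    do {
        ret = SSL_read_early_data(ssl, buf, sizeof(buf), &readbytes);
        if (ret == SSL_READ_EARLY_DATA_SUCCESS)
            fwrite(buf, 1, readbytes, stdout);
    } while (ret == SSL_READ_EARLY_DATA_SUCCESS);

    if (ret == SSL_READ_EARLY_DATA_ERROR) {
        ERR_print_errors_fp(stderr);
    } else if (SSL_accept(ssl) <= 0) {
        /* SSL_READ_EARLY_DATA_FINISH: complete the handshake normally. */
        ERR_print_errors_fp(stderr);
    }

    SSL_free(ssl);
}
```

Note that OpenSSL's built-in replay protection for early data (single-use session tickets) only covers a single server instance; a distributed deployment still needs the global anti-replay the list above warns about.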