GNUnet 0.9.0 released

We are pleased to announce the release of GNUnet 0.9.0. This release is a major redesign of the architecture, including a rewrite of most of the system. The new network is completely incompatible with the old 0.8.x network.

About GNUnet

GNUnet is a framework for secure peer-to-peer networking. GNUnet's primary design goals are to protect the privacy of its users and to guard itself against attacks or abuse. At this point, GNUnet offers two primary applications on top of the framework:

  • The file-sharing service allows anonymous censorship-resistant file-sharing. Files, searches and search results are encrypted to make it hard to control, track or censor users. GNUnet's anonymity protocol (gap) is designed to make it difficult to link users to their file-sharing activities. Users can also individually trade off between performance and anonymity. Despite providing anonymity, GNUnet's excess-based economy rewards contributing users with better performance.
  • The VPN service allows offering of hidden services within GNUnet (using a .gnunet TLD) and can be used to tunnel IPv4 and IPv6 traffic over the P2P network. The VPN can also be used for IP protocol translation (6-to-4, 4-to-6) and it is possible to tunnel IP traffic over GNUnet (6-over-4, 4-over-6).

Other applications are still under development.

Key features of GNUnet include:

  • Works on GNU/Linux, FreeBSD, OS X and W32
  • P2P communication over TCP, UDP, HTTP (IPv4 or IPv6) or WLAN
  • Communication can be restricted to friends (F2F mode)
  • Includes a general-purpose, secure distributed hash table
  • NAT traversal using UPnP, ICMP or manual hole-punching (possibly in combination with DynDNS)
  • Small memory footprint (specifics depend on the configuration)

For developers, GNUnet offers:

  • Access to all subsystems via clean C APIs
  • Mostly written in C, but extensions possible in other languages
  • Multi-process architecture for fault-isolation between components
  • Use of event loop and processes instead of threads for ease of development
  • Extensive logging and statistics facilities
  • Integrated testing library for automatic deployment of large-scale experiments with tens of thousands of peers

Noteworthy improvements in 0.9.0

  • New architecture: multi-process architecture with ARM supervisor
  • New application: VPN
  • New configuration tool: gnunet-setup (part of gnunet-gtk), including automated correctness tests for network and database configuration
  • New service: mesh routing
  • New transports: HTTPS and WLAN
  • New peer discovery in the LAN via broadcast (IPv4) and multicast (IPv6)
  • New datastore table and index structure significantly improves various database operations
  • Improved connectivity using UPnP and ICMP-based NAT traversal
  • Event-driven execution results in significant performance improvements in many areas
  • Power-publishing for file-sharing: user specifies replication priorities to improve content replication for certain files in the network

Availability

The GNUnet 0.9.0 source code is available from all GNU FTP mirrors. The GTK frontends (which include the gnunet-setup tool) are a separate download.

All known releases
https://gnunet.org/current-downloads
GNUnet on an FTP mirror near you
http://ftpmirror.gnu.org/gnunet/gnunet-0.9.0.tar.gz
GNUnet GTK on an FTP mirror near you
http://ftpmirror.gnu.org/gnunet/gnunet-gtk-0.9.0.tar.gz
GNUnet on the primary GNU FTP server
ftp://ftp.gnu.org/pub/gnu/gnunet/gnunet-0.9.0.tar.gz
GNUnet GTK on the primary GNU FTP server
ftp://ftp.gnu.org/pub/gnu/gnunet/gnunet-gtk-0.9.0.tar.gz

Note that GNUnet is now started using "gnunet-arm -s"; the old gnunetd binary is no more. GNUnet should be stopped using "gnunet-arm -e".

Further Information

GNUnet Homepage
https://gnunet.org/
GNUnet Forum
https://gnunet.org/forum
GNUnet Bug tracker
https://gnunet.org/bugs/
IRC
irc://irc.freenode.net/#gnunet

History

Development of the GNUnet 0.9.0 branch began in January 2009 with a larger review of the various architectural issues in GNUnet 0.8.x. For those interested in the rationale behind the major changes in this version, the following text reproduces the key problems that were identified and the solutions that were adopted.

First, it should be noted that the redesign does not change anything fundamental about the application-level protocols or how files are encoded and shared. However, it is not protocol-compatible due to other changes that do not relate to the essence of the application protocols. This choice was made since productive development and readable code were considered more important than compatibility at this point.

The redesign tries to address the following major problem groups describing issues that apply more or less to all GNUnet versions prior to 0.9.x:

PROBLEM GROUP 1: scalability

Assessment:

  • The (0.8.0) code was modular, but bugs were not. Memory corruption in one plugin could cause crashes in others and it was not always easy to identify the culprit. This approach fundamentally does not scale (in the sense of GNUnet being
    a framework and a GNUnet server running hundreds of different application protocols -- and the result still being debuggable, secure and stable).
  • The code was heavily multi-threaded, resulting in complex locking operations. GNUnet 0.8.x had over 70 different mutexes and almost 1000 lines of lock/unlock operations. It is challenging for even good programmers to write or maintain good multi-threaded code of this complexity. The excessive locking essentially prevents GNUnet 0.8 from actually doing much in parallel on multicores.
  • Despite efforts like Freeway, it was virtually impossible to contribute code to GNUnet 0.8 that was not written in C/C++.
  • Changes to the configuration almost always required restarts of gnunetd; the existence of change-notifications does not really change that (how many users are even aware of SIGHUP, how few options worked with it, and at what expense in code complexity!).
  • Valgrinding could only be done for the entire gnunetd process. Given that gnunetd does quite a bit of CPU-intensive cryptography, this could not be done for a system
    under heavy (or even moderate) load.
  • Stack overflows with threads, while rare under Linux these days, result in really nasty and hard-to-find crashes.
  • Structs of function pointers in service APIs needlessly added complexity, especially since in most cases there was no actual polymorphism.

Adopted solutions:

  • Use multiple, loosely-coupled processes and one big select loop in each (supported by a powerful util library to eliminate code duplication for each process).
  • Eliminate all threads; manage the processes with a master process (gnunet-arm, the Automatic Restart Manager), which also ensures that configuration changes trigger the necessary restarts (the latter is not yet implemented)
  • Use continuations (with timeouts) as a way to unify cron-jobs and other event-based code (such as waiting on network IO); a sketch of this pattern follows this list. This way, using multiple processes ensures that memory corruption stays localized. Using multiple processes will also make it easy to contribute services written in other languages. Individual services can now be subjected to valgrind, and process priorities can be used to schedule the CPU better. Note that we cannot just use one process with a big select loop because we have blocking operations (and the blocking is outside of our control, thanks to MySQL, sqlite, gethostbyaddr, etc.). So in order to perform reasonably well, we need some construct for parallel execution, which leads to the following rule: if your service contains blocking functions, it MUST be a process by itself; if your service is sufficiently complex, you MAY choose to make it a separate process.
  • Eliminate structs with function pointers for service APIs; instead, provide a library (still ending in _service.h) API that transmits the requests nicely to the respective process (easier to use, no need to "request" service in the first place; API can cause process to be started/stopped on-demand via ARM if necessary).
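
To make the adopted execution model concrete, here is a minimal sketch of the pattern: one select()-based loop per process that dispatches both timed tasks (replacing cron jobs) and continuations waiting on network IO. The names (struct Task, run_once, and so on) are hypothetical; this is not the actual GNUnet scheduler API, only an illustration of the technique it implements.

    #include <stddef.h>
    #include <sys/select.h>
    #include <sys/time.h>

    typedef void (*task_fn) (void *cls);   /* a continuation */

    struct Task {
      task_fn cb;               /* code to run when the event fires */
      void *cls;                /* closure passed to the continuation */
      struct timeval deadline;  /* when a timed task becomes due */
      int watch_fd;             /* fd to watch for readability, or -1 */
    };

    /* One loop iteration: wait for a watched descriptor to become
     * readable or for a deadline to pass, then run the continuations
     * that are ready.  No threads, no locks. */
    static void
    run_once (struct Task *tasks, size_t n)
    {
      fd_set rs;
      struct timeval now;
      struct timeval timeout = { 1, 0 };  /* simplified: fixed 1s wait */
      int maxfd = -1;
      size_t i;

      FD_ZERO (&rs);
      for (i = 0; i < n; i++)
        if (tasks[i].watch_fd >= 0) {
          FD_SET (tasks[i].watch_fd, &rs);
          if (tasks[i].watch_fd > maxfd)
            maxfd = tasks[i].watch_fd;
        }
      select (maxfd + 1, &rs, NULL, NULL, &timeout);
      gettimeofday (&now, NULL);
      for (i = 0; i < n; i++) {
        if (tasks[i].watch_fd >= 0 && FD_ISSET (tasks[i].watch_fd, &rs))
          tasks[i].cb (tasks[i].cls);   /* network IO is ready */
        else if (tasks[i].watch_fd < 0 &&
                 timercmp (&now, &tasks[i].deadline, >))
          tasks[i].cb (tasks[i].cls);   /* timed task is due */
      }
    }

In GNUnet itself, the scheduler in the util library plays this role in every process.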

PROBLEM GROUP 2: certain APIs often cause bugs

Assessment:

  • The existing logging functions were awkward to use and their expressive power was never really used for much.
  • While we had some rules for naming functions, there were still plenty of inconsistencies.
  • Specification of default values in configuration could result in inconsistencies between defaults in config.scm and defaults used by the program; also, different defaults might have been specified for the same option in different parts of the program.
  • The time API did not distinguish between absolute and relative time, requiring users to know which type of value some variable contained and to manually convert properly. Combined with the possibility of integer overflows this was a major source of bugs.
  • The time API for seconds had a theoretical 32-bit overflow problem on some platforms, which the old code only partially fixed with some hackery.

Adopted solutions:

  • Logging was radically simplified (at least the API).
  • Functions are now more consistently named.
  • Configuration has no more defaults; instead, we load global default configuration files before the user-specific configuration (which can be used to override defaults). config.scm is gone.
  • Time now distinguishes between "struct GNUNET_TIME_Absolute" and "struct GNUNET_TIME_Relative". We use structs so that the compiler won't coerce for us (forcing the use of specific conversion functions which have checks for overflows, etc.); a sketch of this pattern follows this list. Naturally, the need to use these functions makes the code a bit more verbose, but that's a good thing given the potential for subtle bugs.
  • The time API no longer offers any functions operating on 32-bit seconds.
  • There is now a bandwidth API to handle non-trivial bandwidth utilization calculations.
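
The struct-based time types deserve a short illustration. The sketch below shows the technique: because the two types are distinct structs, mixing them up is a compile-time error. The struct names follow the announcement, but the field names and the saturating-add helper are illustrative rather than copied from the real headers.

    #include <stdint.h>
    #include <stdio.h>

    /* Distinct struct types keep the compiler from silently coercing
     * between absolute and relative times; all arithmetic must go
     * through helpers that can check for overflow. */
    struct GNUNET_TIME_Absolute { uint64_t abs_value; };
    struct GNUNET_TIME_Relative { uint64_t rel_value; };

    static struct GNUNET_TIME_Absolute
    absolute_add (struct GNUNET_TIME_Absolute at,
                  struct GNUNET_TIME_Relative delta)
    {
      struct GNUNET_TIME_Absolute ret;

      if (at.abs_value > UINT64_MAX - delta.rel_value)
        ret.abs_value = UINT64_MAX;   /* saturate instead of wrapping */
      else
        ret.abs_value = at.abs_value + delta.rel_value;
      return ret;
    }

    int
    main (void)
    {
      struct GNUNET_TIME_Absolute now = { 1000 };
      struct GNUNET_TIME_Relative timeout = { 500 };
      struct GNUNET_TIME_Absolute deadline = absolute_add (now, timeout);

      /* absolute_add (timeout, now); would not compile: wrong types */
      printf ("deadline: %llu\n",
              (unsigned long long) deadline.abs_value);
      return 0;
    }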

PROBLEM GROUP 3: statistics

Assessment:

  • Databases and others needed to store capacity values similar to what stats was already doing, but across process lifetimes ("state"-API was a partial solution for that, but using it was clunky)
  • Only gnunetd could use statistics, but other processes in the GNUnet system might have had good uses for them as well

Adopted solutions:

  • New statistics library and service that offer an API to inspect and modify statistics (sketched after this list)
  • Statistics are distinguished by service name in addition to the name of the value
  • Statistics can be marked as persistent, in which case they are written to disk when the statistics service shuts down. As a result, we have one shared solution for existing stats uses, application stats, database stats and versioning information.
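
Usage is deliberately simple. The sketch below follows the shape of the API in gnunet_statistics_service.h; the subsystem name "my-service" and the value name are made-up examples, and exact signatures may have changed between releases.

    #include <gnunet/gnunet_util_lib.h>
    #include <gnunet/gnunet_statistics_service.h>

    static void
    count_request (const struct GNUNET_CONFIGURATION_Handle *cfg)
    {
      /* Values are scoped by subsystem name plus value name. */
      struct GNUNET_STATISTICS_Handle *stats;

      stats = GNUNET_STATISTICS_create ("my-service", cfg);
      /* GNUNET_YES marks the value as persistent: the statistics
       * service writes it to disk on shutdown and restores it later. */
      GNUNET_STATISTICS_update (stats, "# requests received", 1, GNUNET_YES);
      GNUNET_STATISTICS_destroy (stats, GNUNET_YES /* sync pending writes */);
    }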

PROBLEM GROUP 4: testing

Assessment:

  • The existing structure of the code, with modules stored in places far away from the test code, resulted in tools like lcov not giving good results.
  • The codebase had evolved into a complex, deeply nested hierarchy often with directories that then only contained a single file. Some of these files had the same name making it hard to find the source corresponding to a crash based on the reported filename/line information.
  • Non-trivial portions of the code lacked good testcases, and it was not always obvious which parts of the code were not well-tested.

Adopted solutions:

  • Code that should be tested together is now in the same directory.
  • The hierarchy is now essentially flat, with each major service having one directory under src/.
  • Naming conventions help to make sure that files have globally unique names.
  • All code added to the new repository should come with testcases with reasonable coverage.

PROBLEM GROUP 5: core/transports

Assessment:

  • The DV service requires session key exchange between DV-neighbours, but the existing session key code could not be used to achieve this.
  • The core requires certain services (such as identity, pingpong, fragmentation, transport, traffic, session) which makes it meaningless to have these as modules
    (especially since there is really only one way to implement these)
  • HELLOs are larger than necessary since we need one for each transport (and hence often have to pick a subset of our HELLOs to transmit)
  • Fragmentation is done at the core level but only required for a few transports; future versions of these transports might want to be aware of fragments and do things like retransmission
  • Autoconfiguration is hard since we have no good way to detect (and then use securely) our external IP address
  • It is currently not possible for multiple transports between the same pair of peers to be used concurrently in the same direction(s)
  • We're using lots of cron-based jobs to periodically try (and fail) to build and transmit

Adopted solutions:

  • Rewrite core to integrate most of these services into one "core" service.
  • Redesign HELLO to contain the addresses for all enabled transports in one message, avoiding having to transmit the public key and signature many, many times (a sketch of the new layout follows this list)
  • With discovery being part of the transport service, it is now also possible to "learn" our external IP address from other peers (we just add plausible addresses to the list; other peers will discard those addresses that don't work for them!)
  • New DV consists of a "transport" and a high-level service (to handle encrypted DV control- and data-messages).
  • Move expiration from one field per HELLO to one per address
  • Require signature in PONG, not in HELLO (and confirm one address at a time)
  • Move fragmentation into helper library linked against by UDP (and others that might need it)
  • Link-to-link advertising of our HELLO is transport responsibility; global advertising/bootstrap remains responsibility of higher layers
  • Change APIs to be event-based (transports pull for transmission data instead of core pushing and failing)
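
The HELLO redesign is easiest to see as a data structure. The sketch below is illustrative only and not the actual wire format: one message carries the public key once, followed by one entry per enabled transport, each with its own expiration as described above.

    #include <stdint.h>

    /* Illustrative layout, not the actual wire format. */
    struct HelloAddress {
      char transport_name[16];    /* e.g. "tcp", "udp", "https", "wlan" */
      uint64_t expiration;        /* per-address expiration */
      uint16_t addr_len;          /* length of the address that follows */
      uint8_t addr[128];          /* transport-specific address bytes */
    };

    struct Hello {
      uint8_t public_key[264];    /* transmitted once for all transports */
      uint32_t num_addresses;
      struct HelloAddress addresses[8];  /* all enabled transports at once */
    };

With expiration stored per address rather than per HELLO, individual addresses can age out without invalidating the rest of the message.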

PROBLEM GROUP 6: File-sharing APIs

Assessment:

  • As with gnunetd, the FS-APIs are heavily threaded, resulting in hard-to-understand code (slightly better than gnunetd, but not much).
  • GTK in particular does not like this, resulting in complicated code to switch to the GTK event thread when needed (which may still be causing problems on GNOME; we are not sure).
  • If GUIs die (or are not properly shut down), the state of current transactions is lost (FSUI only saves to disk on shutdown)
  • FILENAME meta data is killed by ECRS/FSUI to avoid exposing HOME, but what if the user set it manually?
  • The DHT was a generic data structure with no support for ECRS-style block validation

Adopted solutions:

  • Eliminate threads from FS-APIs
  • Incrementally store FS-state always also on disk using many small files instead of one big file
  • Have API to manipulate sharing tree before upload; have auto-construction modify FILENAME but allow user-modifications afterwards
  • DHT API was extended with a BLOCK API for content validation by block type; validators for FS and DHT block types were written; the BLOCK API is also used by the gap routing code (an illustrative validator interface follows this list).
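
To make the BLOCK idea concrete, here is an illustrative validator interface. All names below are hypothetical; the real API differs in detail.

    #include <stddef.h>

    enum BlockType {
      BLOCK_TYPE_FS_DBLOCK,   /* file-sharing data block */
      BLOCK_TYPE_FS_KBLOCK,   /* file-sharing keyword block */
      BLOCK_TYPE_DHT_TEST
    };

    enum EvaluationResult {
      BLOCK_OK,                  /* block is well-formed for its type */
      BLOCK_INVALID,             /* drop it: fails type-specific checks */
      BLOCK_TYPE_NOT_SUPPORTED   /* no validator registered for type */
    };

    /* A validator inspects a block of a given type (optionally together
     * with the query that solicited it) and decides whether it should
     * be stored or routed.  Both the DHT and the gap routing code can
     * run such validators before caching or forwarding a reply. */
    typedef enum EvaluationResult
    (*block_evaluate_fn) (enum BlockType type,
                          const void *query, size_t query_len,
                          const void *block, size_t block_len);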

PROBLEM GROUP 7: user experience

Assessment:

  • Searches often do not return a sufficient / significant number of results
  • Sharing a directory with thousands of similar files (image/jpeg) creates thousands of search results for the mime-type keyword (problem with DB performance, network transmission, caching, end-user display, etc.)
  • Users that wanted to share important content had no way to tell the system to replicate it more; replication was also inefficient (this desired feature was sometimes called "power" publishing or content pushing)

Adopted solutions:

  • Have an option to canonicalize keywords (per a suggestion on the mailing list from the end of June 2009: keep consonants and sort those alphabetically); a sketch follows this list
  • When sharing directories, extract keywords first and then push keywords that are common to all files up to the directory level. In the future, when processing an AND-ed query where a directory matches, inspect the meta data of the files in the directory to possibly produce further results (this requires downloading the directory in the background) [the last part is not yet implemented]
  • A desired replication level can now be specified and is tracked in the datastore; migration prefers content with a high replication level (which decreases as replicas are created). As a result, the datastore format changed; we also took out a redundant size field and added a random number for faster random access.
  • Peers with a full disk (or disabled migration) can now notify other peers that they are not interested in migration right now; as a result, less bandwidth is wasted pushing content to these peers (and replication counters are not generally decreased based on copies that are just discarded; naturally, there is still no guarantee that the replicas will stay available)
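
The keyword canonicalization is simple enough to sketch directly. The function below implements the mailing-list suggestion literally (drop vowels, keep consonants, sort them alphabetically); it is illustrative and not GNUnet's actual code.

    #include <ctype.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int
    cmp_char (const void *a, const void *b)
    {
      return *(const char *) a - *(const char *) b;
    }

    /* Writes the canonical form of 'keyword' into 'out', which must
     * be at least as large as 'keyword'. */
    static void
    canonicalize (const char *keyword, char *out)
    {
      size_t n = 0;
      const char *p;

      for (p = keyword; *p != '\0'; p++) {
        char c = (char) tolower ((unsigned char) *p);
        if (isalpha ((unsigned char) c) && NULL == strchr ("aeiou", c))
          out[n++] = c;                 /* keep consonants only */
      }
      out[n] = '\0';
      qsort (out, n, 1, cmp_char);      /* sort alphabetically */
    }

    int
    main (void)
    {
      char canon[64];

      canonicalize ("Keyword", canon);
      printf ("%s\n", canon);   /* prints "dkrwy" */
      return 0;
    }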

SUMMARY

Features eliminated from util:

  • threading
  • complex logging features [ectx-passing, target-kinds]
  • complex configuration features [defaults, notifications]
  • network traffic monitors
  • IPC semaphores
  • second timers

New features in util:

  • scheduler (event loop)
  • service and program bootstrap code
  • bandwidth and time APIs
  • buffered IO API
  • HKDF implementation (crypto)
  • load calculation API
  • bandwidth calculation API