NNCP: treatment of online and censored addictions using the store-and-forward method

Hornbeam
June 11, 2017
This article raises the question of the depressing state of data availability on the Internet, of rampant censorship and total surveillance. Are governments or corporations to blame? What can be done? Create your own social networks, participate in anonymization networks, build mesh networks and store-and-forward solutions. It ends with a demonstration of the NNCP utilities for building such store-and-forward friend-to-friend networks.

Who is to blame?


Lately there has been much discussion in RuNet about the fear of draft laws that could call into question the very existence of our part of the Internet. These bills may be interpreted so broadly that anything using encryption becomes illegal and prohibited — and who needs a network in which it is impossible to transfer data other than publicly, impossible to talk privately?

The authorities say there is no question of a ban, only of control: if they can read it, then there is no problem. But we know there is no such thing as encryption that is "transparent" to some parties yet secure against the rest. Such "control" amounts to requiring that our every word pass through an intermediary: direct communication between two interlocutors becomes unacceptable. Moreover, a centralized intermediary is dangerous in itself, inviting abuse of censorship and denial of access to whole layers of information. It also produces a colossal flow of private information about people (every click is recorded) — that is, global surveillance with all the ensuing problems.

All these problems are usually framed as opposition to the authorities: they, supposedly, are to blame for the fact that we, ordinary people, may lose such a wonder of the world as the Network of Networks. But is everything really so bad, and is it really the authorities who "dislike" us and our technologies?

It's not that bad — it's much worse. The inaccessibility of information, global total surveillance and censorship became reality de facto long ago, without any bills being passed. And the instigators of all this were, and remain, corporations like Google, Facebook, Microsoft and Apple.

It's no secret that Web technologies are extremely complex, cumbersome and labor-intensive to develop: try writing a Web browser from scratch, with all its CSS, DOM and JavaScript. Actively developed browsers can be counted on the fingers of one hand, and it is people from these very corporations who develop them. So, logically enough, all development moves exclusively toward corporate needs.

What was the Web before, technologically? The user has a special program (the good old warm tube Web browser) that, using the standardized HTTP protocol (which says which document, which resource to fetch), connects to servers, receives an HTML document (possibly with images) and displays it. It is a distributed document storage network with a single protocol. We receive ready-made documents over the network which can be saved to disk and read without any further connections to servers.

How does the Web of corporations work? The user has a special program (still called a Web browser) which downloads a program written in JavaScript (or nowadays possibly WebAssembly, which is ordinary binary executable code, much like an .exe file), launches it in a virtual machine, and that program, following its own protocol (its own rules for interacting with the server and its own message format), starts talking to the server in order to fetch data and display it on the screen. Trying to save what is displayed, believing it to be a document, is unlikely to succeed. Automating document retrieval will not work either, because each individual site is a separate program with its own communication protocol and its own message format (a particular JSON request structure, for example). The Web is now a distributed network of applications downloaded onto user computers.

Of course, all these programs are effectively closed: the code is at best obfuscated, unsuitable for humans to read or edit. Previously, we installed a program once, and it implemented one given protocol and supported at least one standardized document format. Now, every time, with every site, we download yet another different program.

What is a closed proprietary program? It is when you do not control your computer, when you do not know what the program on it is going to do. It is not you who tells your machine what to do: the program tells you what you are allowed to do. Of course, all of this applies to any proprietary program, not just automatically downloaded JS code. The significant difference, however, between a Microsoft Windows with some Microsoft Word installed on your computer and JS code is that you install the former once, and if you notice nothing dangerous or alarming in their behavior, you can simply stop worrying and trust them. In the Web world, on every visit to a site you may receive a new version of the program, and no modern browser will even tell you about it. If yesterday you saw no sign of the site sending private data to the server, five minutes after your next visit it may start doing so. Nobody, without special plugins and a dance with a tambourine, will warn you that you are now running a different version of the downloaded program. The whole ecosystem is geared toward the unquestioning downloading of non-free software. The owners of such sites can make them do whatever they like on users' computers by changing literally a few files on their servers, and the new version of the program will be executed automatically.

Maybe the problems are exaggerated, since this is not an ordinary .exe program with access to vast computer resources, but a program that runs, in theory, in an isolated virtual machine? Unfortunately, the codebase of modern browsers is so huge and complex that even merely auditing its security is very expensive — not to mention that it changes so quickly that any analysis is outdated by the time it is finished. Complexity is the main enemy of any security system. Sophisticated protocols like TLS have shown that even when hundreds of millions of people use, and develop against, free and open source software like OpenSSL, fatal critical bugs are still possible. Moreover, we have all seen that Rowhammer attacks can be carried out from a browser, and there are successful attacks on the processor cache, aimed at recovering AES keys, also performed from a browser. A virtual machine of such complexity, changing so rapidly, cannot be safe by definition: even full virtualization in some Xen or KVM does not help against certain attacks. And what would be the point of building a well-isolated environment when the corporations' business is, on the contrary, collecting as much data as possible?

Now let's try disabling JavaScript in the browser and walking around a variety of modern sites. Some resources will not work at all, but on the remaining 99% of sites we will see that a huge amount of advertising has vanished somewhere. We will see orders of magnitude fewer requests leaving our machines and handing our private data to third-party sites and servers — that is, surveillance is significantly reduced, if only through the absence of contact with numerous third parties.

All this is done, as we are officially told, for the sake of advertising — targeted advertising — for the sake of improving it, and for our own sake. And all of it improves solely through surveillance of us. The renowned security expert and cryptographer Bruce Schneier has repeatedly emphasized that surveillance is the business model of the Internet. All these corporations live by following users (I stress: surveillance means collecting data about them) and selling the information obtained.

Someone may object: what kind of spying is it if I walk into a store wearing such-and-such a coat, my face plainly visible — I gave that information away myself. Indeed, I send my IP address, TCP ports and browser User-Agent myself and cannot avoid sending them: that is how the Web works. But if the seller starts asking my name and where I am from, and follows on my heels, that is already a request for information not needed to complete the purchase — that is already surveillance. Yet corporate websites do exactly that, destroying access to information via standardized protocols (which say so little about us) and document formats: once they force you to use their software, they are free to have it spy as they please.

Ask around: how many people were specifically, genuinely affected by Roskomnadzor's blocking, losing access to some information? Apart from loud short-lived bans like that of GitHub, the most people will name is the loss of Rutracker. However, as with The Pirate Bay, it should be understood that this is no longer a whim of the authorities and not politics, but the power of corporations like Hollywood and the like: their financial fortunes and influence over the governments of countries are considerable. The authorities themselves have no reason to close Rutracker or The Pirate Bay, since these are basically cheap (infrastructure-wise) entertainment that distracts people from politics (which could potentially endanger those authorities).

But the loss of tons of information because a site stopped working by plain HTTP + HTML methods, forcing people to use its software and to be constantly online (if a person is not online, how do you collect information about him?) — that, in my opinion, has affected people and keeps affecting them ever more strongly. Disconnect a person from the Internet and he can do nothing at all: he cannot even read his mail, look at his photos or recall a meeting, because all of it has been left in the clouds.

Information that has "fallen" into a social network like VKontakte is not available for indexing by third-party robots, and is often not available to unauthorized visitors at all. Only if I allow closed proprietary programs to be downloaded, and register by handing over the identifier of my personal "beacon" (a cell phone), will I be allowed to see a couple of paragraphs of text about some ordinary music concert. A wild number of Web developers have simply forgotten how to make sites any other way: without surveillance and without installing their software on each user's machine, they will not show a thing, not one bit of payload. Corporations train people in precisely this unethical and disrespectful method of development. To see even a single message in Google Groups you have to download nearly two megabytes of JS programs — comments are superfluous.

Thus, total surveillance, inaccessibility of information, centralized censorship — all of it has already arrived, and all of it is cultivated by developers themselves, by ordinary people. The social networks are "clean": nobody, they say, forces us to post all this. Indeed — people are extremely easy to manipulate, and it is extremely easy to keep silent about what they are losing while showing only the positive sides of one's approach. Value begins to be understood only upon loss.

There are many people who use almost nothing but VKontakte and YouTube: their every action is already monitored, all their correspondence (they have no email, just a VK or Telegram account) is read, and all incoming information is trivially censored (how many times has Facebook been caught manipulating people by censoring data?). They may already be the majority — though nobody forced them, and a choice still exists. Providers already offer such people special, cheaper tariff plans with access only to a handful of services. When the mass of these people becomes truly critical, only such tariff plans will remain: what is the point of a provider maintaining infrastructure that provides access to the entire Internet, when peering with half a dozen corporate networks satisfies 99.99% of users? Prices for full-fledged tariffs will rise (if they survive at all), and that will itself become a barrier to Internet accessibility.

Are people worried that a CA certificate will be forced into TLS connections, as happened for instance in Kazakhstan, in order to monitor, eavesdrop and censor? Yet these same people do not care that other parties (corporate services) install their proprietary software and their protocols outright. These same people, posting information only to social networks, support yet again the centralization and hegemony of a single corporation over all the data. They have long been digging the Internet's grave with their own hands, while trying to pin all the troubles on the usual scapegoat.

They run to the corporations with open arms, and find fault with the authorities, who do far less catastrophic things.

What is to be done?


And what about the few who really do need the Internet — who need the ability, roughly speaking, to send arbitrary data from one arbitrary computer to another?

If you don't like the services of corporations or social networks, nobody forces you to use them, and you can always build your own analogue (with blackjack and hookers, optionally). Every home has a powerful computer and a fast network — all the means of technical implementation. Social network engines like Diaspora or GNU Social have existed for a long time.

If you don't like how data is served — with a huge barrier to entry (a custom protocol and format) — then at least do it properly and satisfactorily yourself. This applies to developers.

If a resource lacks the hard drives or bandwidth to meet all demand, do not forget about cooperation: offer the possibility of mirroring the resource. Instead, unfortunately, many move to CDNs like Cloudflare, which often blocks access from the Tor network, forcing people through humiliating deanonymization procedures.

If you do not like that more and more providers give no static IP addresses — or no full-fledged address at all, only an internal address behind NAT — then placing resources inside overlay networks such as Tor hidden services (.onion) or I2P (.i2p) may be the only way for the outside world to connect to you. Remember to participate in such networks and donate the often idle resources of your computers. Develop and maintain not only low-latency networks, which by their nature are a priori subject to a number of attacks, but also networks like Freenet and GNUnet. So that it is not, as always, a matter of waiting until the thunder finally strikes.

If corporate-backed censorship really does reach the point where arbitrary computers can no longer exchange encrypted traffic with one another — that is, the Internet closes down and only a whitelist of remote access to a dozen services remains — then you can create your own network.

Laying fiber optics or cables is hardly an option, as it costs serious money (leaving aside the fact that it is not even permitted). But mesh networks over wireless channels can be built even in Spartan home conditions. The option need not be a completely isolated network: it may be one in which at least somebody has access to the working Internet and acts as a gateway. There are plenty of projects to choose from.

But do not forget about the corporations' desire to ban firmware changes on WiFi routers, and that a huge number of WiFi modules do not work without binary blobs. Such vendor lock-in does not rule out strict control over traffic — just as in modern proprietary operating systems it is impossible to install programs not cryptographically signed by the corporation (which approved that software for launch). Making a WiFi chip at home is prohibitively expensive, and it is quite possible that the chips capable of forming a mesh network will simply disappear from the market. This concerns not only WiFi but any other wireless solution with a fat communication channel. Amateur radio stations can be built at home, but their channel capacity is deplorable, and you cannot simply put such a station up at home — you need permits.

Creating a mesh network is possible in theory, but in practice it takes a large number of sufficiently geographically distributed people to build something of impressive size and of practical, not merely academic, benefit. Opinions differ, but my personal experience shows that people are not particularly eager to cooperate, so there is little hope that a mesh network could be created even in Moscow. And many people really are needed, because WiFi (and other affordable high-capacity radio solutions) works over relatively short distances.

In addition, mesh networks and their protocols are tailored exclusively to real-time connections. They are designed so that sites can be opened in real time or a terminal used remotely. If connectivity is lost, it is equivalent to a cut cable: until it is restored, a section of the network is unreachable, and real-time low-latency programs become unusable. Good channel redundancy is also required — which is expensive and resource-hungry.

A more radical solution is to forget about real-time services and remember that life without them is entirely possible. Is it so critical that your message arrives not within a second but perhaps within a few minutes or hours? Email remains the most reliable and widespread method of communication, and it guarantees no delivery timeframe: delays of tens of minutes are standard.

To read most sites, real time is not needed at all. A site can be downloaded with programs like GNU Wget, available in every free operating system, and a mirror of the site sent to yourself. There is even a standard for storing Web data: WARC (Web ARChive), used by the Internet Archive; a single file can contain an entire Web site. The same approach is used in the Freenet network: sites there are also archived, so that instead of pulling hundreds of blocks of data through the network, each of which would probably have been requested, you get everything atomically at once. I stress: the Web archive format is already standardized, and dozens of petabytes of the Internet Archive are stored in it. Valuable pages should be saved to disk straight from the Web browser, because a link that works today may easily be gone tomorrow. You may have learned what was on it, but you can no longer give it to a friend. That is lost information, especially in the context of centralized systems such as CDNs and social networks.
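
For illustration, here is roughly how such a mirror can be made with nothing but GNU Wget (example.com is a placeholder):

% # mirror a site for offline reading; --warc-file additionally writes
% # a standard WARC archive alongside the downloaded tree
% wget --mirror --convert-links --adjust-extension --page-requisites \
      --warc-file=example.com https://example.com/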

If you give up real time and the mandatory always-online mode of operation, then even mesh networks are unnecessary: far less demanding store-and-forward (save and pass on) solutions suffice. One such network, still alive today, is FidoNet, which before the spread of cheap Internet was quite widespread (among that same minority) worldwide. Yes, messages took hours or whole days, but believe me: the conversation in Fido was easily far more interesting than anything you can now find among the incredible number of Internet forums and mailing lists.

Node-to-Node CoPy


I am in no way calling for the resurrection of FidoNet or UUCP (Unix-to-Unix CoPy), about the use of which I have already written before!

Firstly, all these systems were created at a time when cryptography was effectively unavailable to mere mortals, and when communication channels were hardly eavesdropped on — and if they were, then purposefully rather than en masse (these days it is technically easier to eavesdrop on everyone than on specific individuals). There was no business built on spying on people back then. These systems provided neither encryption nor strong authentication.

Secondly, FidoNet was created for simple personal computers running DOS, unlike UUCP, the former de facto means of communication between Unix systems. Nowadays almost every free OS is a Unix-like world, and few people want to give up their usual email client and their usual programs for talking to the "outside world". FidoNet and UUCP are entirely different ecosystems.

Ideally, one would like something as transparent as possible for email and file transfer, yet with modern cryptographic protection. This can be achieved with UUCP, but only by bolting on extra wrappers for encryption and authentication. Besides, neither FidoNet nor UUCP is designed, out of the box, to work via removable storage media. Yes, in FidoNet you can copy outgoing packets onto a floppy disk and copy them off again at the target node — a manual action, easy thanks to the system's simplicity, but not provided by default (there are no commands for "here is a floppy, I am off to see Vovan" or "here I am with a floppy from Vovan, who also dropped in on Vaska — deal with it").

To satisfy this desire, the NNCP set of free (GNU GPLv3+) programs was created. It is geared toward organizing a modern small store-and-forward network at the least possible human cost.

Let's look at how to use all this with specific examples.

We — Alice and Bob (let's take names from the world of cryptography) — have installed NNCP and now have a set of nncp-* commands. First, each of us creates (with the nncp-cfgnew command) our own cryptographic key pairs:

alice% nncp-cfgnew | tee alice.yaml
self:
  id: ZY3VTECZP3T5W6MTD627H472RELRHNBTFEWQCPEGAIRLTHFDZARQ
  exchpub: F73FW5FKURRA6V5LOWXABWMHLSRPUO5YW42L2I2K7EDH7SWRDAWQ
  exchprv: 3URFZQXMZQD6IMCSAZXFI4YFTSYZMKQKGIVJIY7MGHV3WKZXMQ7Q
  signpub: D67UXCU3FJOZG7KVX5P23TEAMT5XUUUME24G7DSDCKRAKSBCGIVQ
  signprv: TEXUCVA4T6PGWS73TKRLKF5GILPTPIU4OHCMEXJQYEUCYLZVR7KB7P2LRKNSUXMTPVK36X5NZSAGJ632KKGCNODPRZBRFIQFJARDEKY
  noiseprv: 7AHI3X5KI7BE3J74BW4BSLFW5ZDEPASPTDLRI6XRTYSHEFZPGVAQ
  noisepub: 56NKDPWRQ26XT5VZKCJBI5PZQBLMH4FAMYAYE5ZHQCQFCKTQ5NKA
neigh:
  self:
    id: ZY3VTECZP3T5W6MTD627H472RELRHNBTFEWQCPEGAIRLTHFDZARQ
    exchpub: F73FW5FKURRA6V5LOWXABWMHLSRPUO5YW42L2I2K7EDH7SWRDAWQ
    signpub: D67UXCU3FJOZG7KVX5P23TEAMT5XUUUME24G7DSDCKRAKSBCGIVQ
    noisepub: 56NKDPWRQ26XT5VZKCJBI5PZQBLMH4FAMYAYE5ZHQCQFCKTQ5NKA
    sendmail:
    - /usr/sbin/sendmail
spool: /var/spool/nncp/alice
log: /var/spool/nncp/alice/log

NNCP builds exclusively friend-to-friend (F2F) networks, where every participant knows the neighbors he communicates with. If Alice needs to contact Bob, they must first exchange their public keys and write them into their configuration files. Nodes exchange so-called encrypted packets with each other — a kind of analogue of OpenPGP, each packet explicitly addressed to a given participant. This is not peer-to-peer (P2P), where anyone can connect to anyone and send something: that opens the door to Sybil attacks, in which attacker nodes can disable the whole network or at least monitor the activity of its participants.

The simplest configuration file contains the following fields:

  • self.id - the ID of our node
  • self.exchpub/self.exchprv and self.signpub/self.signprv - keys used for creating encrypted packets
  • self.noisepub/self.noiseprv - optional keys used when communicating with nodes over a TCP connection
  • neigh - information about all known network participants, the "neighbors". It always contains a self entry holding your public data; this is exactly what can safely be handed out to people, since it contains only the public parts of the keys
  • spool - the path to the spool directory, holding outgoing encrypted packets and unprocessed incoming ones
  • log - the path to the log recording every performed action (file/letter sent, received, and so on)

Bob generates his own file, exchanges public data with Alice, and adds her entry to his configuration file (Alice does the same):
bob% cat bob.yaml
self:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  exchprv: HXDO6IG275S7JNXFDRGX6ZSHHBBN4I7DQ3UGLOZKDY7LIBU65LPA
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  signprv: TT2F5TIWJIQYCXUBC2F2A5KKND5LDGIHDQ3P2P3HTZUNDVAH7QUPO6L7GFDTZKXFNVAIEQY7GDO2NNESVZXX6JL3BXRF7JVYQGYU3IA
  noiseprv: NKMWTKQVUMS3M45R3XHGCZIWOWH2FOZF6SJJMZ3M7YYQZBYPMG7A
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
neigh:
  self:
    id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
    exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
    signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
    noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
    sendmail:
    - /usr/sbin/sendmail
  alice:
    id: ZY3VTECZP3T5W6MTD627H472RELRHNBTFEWQCPEGAIRLTHFDZARQ
    exchpub: F73FW5FKURRA6V5LOWXABWMHLSRPUO5YW42L2I2K7EDH7SWRDAWQ
    signpub: D67UXCU3FJOZG7KVX5P23TEAMT5XUUUME24G7DSDCKRAKSBCGIVQ
    noisepub: 56NKDPWRQ26XT5VZKCJBI5PZQBLMH4FAMYAYE5ZHQCQFCKTQ5NKA
spool: /var/spool/nncp/bob
log: /var/spool/nncp/bob/log

Next, Bob wants to send a file to Alice:

bob% export NNCPCFG=/path/to/bob.yaml
bob% nncp-file ifmaps.tar.xz alice:

and also a backup of his file system, this time doing it through a Unix-way pipeline:

bob% zfs send zroot@backup | xz -0 | nncp-file - alice:bobnode-$(date "+%Y%m%d").zfs.xz
2017-06-11T15:44:20Z File - (1.1 GiB) transfer to alice:bobnode-20170611.zfs.xz: sent

Then he can look at what has piled up in his spool directory:
bob% nncp-stat
self
alice
        nice: 196 | Rx:        0 B,   0 pkts | Tx:    1.1 GiB,   2 pkts

Each packet, in addition to information about the sender and recipient, carries a so-called nice level — simply a one-byte number, a priority. Almost every action can be restricted to a maximum allowed nice level. This is the equivalent of the grade in UUCP: it lets higher-priority packets (lower nice values) be processed first, so that mail messages get through even while a DVD movie is being transferred in the background. Sending files defaults to priority 196, which is what we see here. Rx is received packets not yet processed; Tx is packets awaiting transmission.

Packets are encrypted, and their integrity is necessarily verified. They are also authenticated: it is reliably known whom each came from. In almost all commands you can specify a minimum required packet size: padding garbage is added automatically up to the desired size, so the real payload size is hidden, encrypted, from an outside observer.
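
A sketch of these knobs (the -nice and -minsize option names, and the unit of the size argument, should be checked against nncp-file's own help):

bob% # send a note with higher priority than the default 196, padding
bob% # the encrypted packet so its true size is hidden
bob% nncp-file -nice 96 -minsize 1024 note.txt alice: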

When transferring files you can give the -chunked option, specifying the chunk size into which the file should be split. The scheme is strongly reminiscent of BitTorrent: the file is cut into pieces, and a meta-file is added with information about each piece, for file recovery that is guaranteed from the integrity point of view. This is useful when the size of huge files needs to be hidden (as with a corpse: dragging it whole is problematic, but cutting it into six pieces is much easier). It is also useful when large volumes must be carried on drives of obviously smaller size: the transfer then happens over several iterations.
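
A sketch of the chunked mode, assuming the size argument is given in KiB (again, check nncp-file's help):

bob% # a 4 GiB image cut into 1 GiB pieces can cross to Alice over a
bob% # smaller flash drive in several nncp-xfer iterations
bob% nncp-file -chunked 1048576 dvd.iso alice: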

Now the packets must somehow be conveyed to Alice. One way is via a data drive, by plain copying of files on a file system.

Bob takes a USB flash drive, creates a file system on it, and runs:

bob% nncp-xfer -mkdir /mnt/media
2017-06-11T18:23:28Z Packet transfer, sent to node alice (1.1 GiB)
2017-06-11T18:23:28Z Packet transfer, sent to node alice (350 KiB)

and gets a set of directories there with all outgoing packets for every node known to him. Which nodes to "consider" can be limited with the -node option. The -mkdir option is needed only for the first run: if directories for the corresponding nodes already exist on the drive, they are processed; otherwise such nodes are simply skipped. This is convenient: if the flash drive "walks" only between certain members of the "network", only packets for them end up on the drive, with no need to specify -node every time.

Instead of a USB drive it could be a temporary directory from which an ISO image is built and burned to CD. It could be some public FTP/NFS/SMB server mounted at /mnt/media. Such a NAS can sit at work or anywhere else, as long as the two communicating participants can at least occasionally connect to it. It can be a portable PirateBox collecting and distributing NNCP packets along the way. It can be a USB dead drop used now and then by completely different, unacquainted people: if the target directory contains unknown nodes, we ignore them, learning only the fact of their existence and the number of packets in transit.

All these drives and storage points hold only encrypted packets. One can see from whom and to whom they are destined, how many, of what size and priority — but no more. Without the private keys you cannot even learn a packet's type (mail message, file, or transit packet). NNCP does not attempt to be anonymous.

Using nncp-xfer requires no private keys: only knowledge of the neighbors — their identifiers — is needed. So you can set off on the road with a spool directory and a minimal configuration file (without private keys), prepared by the nncp-cfgmin command, without fear of key compromise. The nncp-toss call, which does require private keys, can be made at any other convenient time. If you still fear for the safety of the private keys, or even of the configuration file listing all your neighbors, there is the nncp-cfgenc utility, which encrypts it. A passphrase typed at the keyboard serves as the encryption key: the encrypted file carries a salt, and the password is strengthened by the CPU- and memory-hard Balloon algorithm, so with a good passphrase you need not worry much about compromise (except by soldering iron).
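
A sketch of that road workflow (exact invocations should be checked against each command's help; the file names are arbitrary):

bob% nncp-cfgmin > bob.mincfg.yaml          # public data only, safe to carry
bob% nncp-cfgenc bob.yaml > bob.yaml.eblob  # passphrase-encrypted full config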

Alice just needs to execute the command to copy the files intended for her to her spool directory:

alice% nncp-xfer /mnt/media
2017-06-11T18:41:29Z Packet transfer, received from node bob (1.1 GiB)
2017-06-11T18:41:29Z Packet transfer, received from node bob (350 KiB)
alice% nncp-stat
self
bob
        nice: 196 | Rx:    1.1 GiB,   2 pkts | Tx:        0 B,   0 pkts

We see that she now has unprocessed (Rx) packets in her spool directory. But it is not all that simple: even though ours is a network of friends (F2F) — trust, but verify. You must explicitly allow a given node to send you files. For that, Alice must add to Bob's section of her configuration file an indication of where to put the files received from him:
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
  incoming: /home/alice/bob/incoming

After that, incoming encrypted packets must be processed with the nncp-toss command (analogous to a tosser in FidoNet):
alice% nncp-toss
2017-06-11T18:49:21Z Got file ifmaps.tar.xz (350 KiB) from bob
2017-06-11T18:50:34Z Got file bobnode-20170611.zfs.xz (1.1 GiB) from bob

This command has a -cycle option that lets it hang in the background, regularly checking and processing the spool directory.
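
For example (assuming the -cycle argument is a delay in seconds; check nncp-toss's help), to re-examine the spool every minute in the background:

alice% nncp-toss -cycle 60 &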

Besides sending files, you can also request them. To do this, a freq (file request) entry must be written explicitly in the configuration file for each node, naming the directory from which that node may request files:

bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
  incoming: /home/alice/nncp/bob/incoming
  freq: /home/alice/nncp/bob/pub

Now Bob can request a file:
bob% nncp-freq alice:pulp_fiction.avi PulpFiction.avi
2017-06-11T18:55:32Z File request from alice:pulp_fiction.avi to pulp_fiction.avi: sent
bob% nncp-xfer -node alice /mnt/media

and Alice, after processing incoming messages, will automatically send him the requested file:
alice% nncp-toss
2017-06-11T18:59:14Z File /home/alice/nncp/bob/pub/pulp_fiction.avi (650 MiB) transfer to bob:PulpFiction.avi: sent
2017-06-11T18:59:14Z Got file request pulp_fiction.avi to bob

There is no built-in functionality for sending a list of files, but users can always agree on a convention — for example, saving the output of ls -lR into an ls-lR file in the root directory, as sketched below.
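
Such a convention might look like this (the ls-lR file name is just the habit mentioned above; the paths are from the earlier examples):

alice% (cd /home/alice/nncp/bob/pub && ls -lR > ls-lR)
bob% nncp-freq alice:ls-lR ls-lR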

Now imagine that Alice and Bob both know Eve (a good Eve, not the evil eavesdropper of cryptography — ours is a network of friends), but Alice has no direct contact with her. They live in different cities, and only Bob shuttles between them from time to time. NNCP supports transit packets, similar to Tor's onion encryption: Alice can create an encrypted packet for Eve and wrap it in another encrypted packet for Bob, indicating that it should be passed on to Eve. The chain length is unlimited, and each intermediate participant knows only the previous and next links in the chain, not the true sender and recipient.

The transit path is given by a via entry in the node's section. For example, Alice wants to say that Eve is reachable through Bob:

bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
eve:
  id: URVEPJR5XMJBDHBDXFL3KCQTY3AT54SHE3KYUYPL263JBZ4XZK2A
  exchpub: QI7L34EUPXQNE6WLY5NDHENWADORKRMD5EWHZUVHQNE52CTCIEXQ
  signpub: KIRJIZMT3PZB5PNYUXJQXZYKLNG6FTXEJTKXXCKN3JCWGJNP7PTQ
  noisepub: RHNYP4J3AWLIFHG4XE7ETADT4UGHS47MWSAOBQCIQIBXM745FB6A
  via: [bob]

Alice does not know exactly how Eve communicates with Bob, and she need not know: the messages must somehow reach her, and Bob sees only the fact that transit traffic is passing. Bob's outgoing packet for Eve is created automatically by nncp-toss while processing the packet from Alice.

If we are talking about serious security, then air-gapped computers are needed — machines not connected to data networks, ideally equipped with, say, only a CD-ROM/RW. "In front" of them stands a computer into which the flash drives, or traffic from other nodes, arrive: there you can make sure the drives carry nothing malicious, and even if that machine is connected to the Internet or another network, OS vulnerabilities cannot compromise the air-gapped computer. Only transit packets arrive at such a front node, and it generates outgoing packets for the air-gapped computer, written to CD.

If you add a section to the configuration file:

notify:
  file:
    from: nncp@bobnode
    to: bob+file@example.com
  freq:
    from: nncp@bobnode
    to: bob+freq@example.com

then notifications about transferred files will be sent to bob+file@example.com, and notifications about requested files will be sent to bob+freq@example.com.

NNCP integrates easily with a mail server for transparent mail transfer. Why is plain SMTP not suitable — it is store-and-forward too, after all? Because you cannot use flash drives with it (without contortions); because SMTP traffic, like email messages themselves, is far from compact (binary data travels as Base64); and because it imposes quite serious limits on the maximum mail delivery waiting time. If you are a village in Uganda and a courier with a flash drive visits once a week, SMTP will not work, whereas NNCP will happily carry a pile of letters for every villager and take their replies back to the city.

Mail is sent with the nncp-mail command. In essence it is exactly the same file transfer, except that on the target machine sendmail is called instead of saving the message to disk, and the message itself is compressed. Setting up Postfix takes literally a few lines (a hypothetical sketch follows the list):

  • master.cf specifies how to use the nncp transport (how to call the nncp-mail command)
  • the given domain/user is mapped to this nncp transport
  • the given domain or user is declared transit (relay)
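
A hypothetical sketch of such a setup, based on Postfix's standard pipe(8) transport — the transport name, user, path and the exact nncp-mail arguments are assumptions to be checked against NNCP's documentation:

# master.cf: the nncp transport is a pipe to the nncp-mail command
nncp      unix  -       n       n       -       -       pipe
  flags=F user=nncp argv=/usr/local/bin/nncp-mail -quiet $nexthop $recipient

# main.cf: route mail for the neighbor's domain through it and relay it
transport_maps = hash:/etc/postfix/transport
relay_domains = bobnode

# /etc/postfix/transport: map the domain to the transport and node name
bobnode   nncp:bob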

On the target machine you must set, for the given node, the path to the command that sends mail (if it is not set, mail will not be delivered):
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
  sendmail: [/usr/sbin/sendmail, "-v"]

This transparent approach lets you do away entirely with POP3/IMAP4 servers and with email clients saving letters as drafts, when all you need is to move mail transparently between your mail server and your laptop. The mail server always faces the Internet; how often the laptop connects does not matter: NNCP will accept the mail and hand it to its local sendmail, which delivers it to the local mailbox. Sending works the same way: messages submitted to the laptop's local mail server land in the NNCP spool directory and go out to the server at the earliest opportunity (from the mail client's point of view, the mail was sent successfully right away). Plus the mail traffic is compressed.

Portable drives are not always convenient, especially while we still have a working Internet. NNCP can happily send packets over TCP connections: for this there are the nncp-daemon daemon and the nncp-call command that calls it.

To exchange encrypted packets one could use the rsync protocol, perhaps over an OpenSSH connection, but that would be something of a crutch — plus one more link and one more set of node authentication keys. NNCP uses its own synchronization protocol, SP (sync protocol), running over a Noise-IK encrypted and authenticated channel. Although NNCP packets are themselves encrypted, it is still not good to expose them openly on communication links — hence the extra layer. Noise provides perfect forward secrecy (PFS: compromising Noise's private keys does not allow previously intercepted traffic to be read) and two-way authentication: strangers cannot connect to you and send, or try to receive, anything.

The SP protocol tries to be as efficient as possible in half-duplex mode: as much data as possible is pushed in one direction without waiting for anything in return. This is critical for satellite links, where protocols that wait for acknowledgements or request the next piece of data degrade utterly in throughput. Full duplex is fully supported too, utilizing the channel in both directions.

SP also tries to be economical with the number of packets sent: already during the Noise-IK handshake the lists of packets available for download are transmitted, and download requests are sent in batches.

SP is designed for error-free communication channels: the transport need not be TCP — anything will do, as long as it introduces no errors. Packets with broken integrity are not taken into processing, and successful receipt of a packet is reported to the other side only after its integrity has been checked.

The protocol allows any packet download to resume from an arbitrary position: when a connection drops, the most noticeable loss is the TCP + Noise handshake, after which transfer continues from the same place. As soon as a new packet to send appears on a node, the other side learns of it almost immediately (within a second), allowing the current download to be interrupted so a higher-priority packet can jump the queue.

% nncp-daemon -nice 128 -bind [::]:5400

This command starts a daemon that listens on all addresses on TCP port 5400, accepting packets with a nice level no higher than 128. If the likely daemon addresses of a given node are known, they can be written into the configuration file:
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  ...
  addrs:
    lan: "[fe80::be5f:f4ff:fedd:2752%igb0]:5400"
    pub: "bob.example.com:5400"

and then call the node:
% nncp-call bob
% nncp-call bob:lan
% nncp-call bob:pub
% nncp-call bob forced.bob.example.com:1234

The first command tries all the addresses from addrs in turn; the second and third explicitly select the lan and pub entries; the last uses the exact address given, regardless of the configuration file.

The daemon, like nncp-call, can be given the minimum acceptable nice level: for example, pass only mail messages (assuming they have higher priority) over the Internet channel, leaving heavy file transfers to portable drives or to a separately launched daemon listening only on the fast local network. nncp-call also takes -rx and -tx options to only receive or only send packets.
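
For example (a sketch; the -nice flag for nncp-call is an assumption to verify against its help), to pick up only high-priority packets from Bob's public address without sending anything:

% nncp-call -rx -nice 128 bob:pub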

Unfortunately, the SP protocol has no way for the parties to agree that the connection can be closed. For now this is handled by a timeout: if both nodes have nothing to send or receive, they drop the connection after a set time. But with the -onlinedeadline option this time can be set very long (hours), giving a long-lived connection in which notifications of new packets arrive immediately. This saves on expensive protocol handshakes and, in the case of mail, gives very fast notification of incoming messages — without the constant connection-breaking polling of POP3.
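
A sketch of such a long-lived call, assuming -onlinedeadline takes seconds, as in the calls configuration shown further below:

% nncp-call -onlinedeadline 3600 bob:lan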

TCP connections can be used even without an Internet connection. For example, with a little shell-script glue you can bring up an ad-hoc WiFi network on a laptop, listen with the daemon on an IPv6 link-local address, and keep trying to connect to another known address. Riding the subway escalator, two laptops can find each other in a shared ad-hoc network relatively quickly and "shoot" NNCP packets at each other. WiFi is fast enough that even a few seconds suffice to transfer a large pile of mail. And if it is a commuter train car, where the same people ride to and from work at about the same time every day, there may be tens of minutes of continuous connectivity. And do not forget that by carrying a two-terabyte hard drive back and forth once a day, you get a channel with a throughput of more than 185 Mbit/sec.
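
A rough sketch of that escalator scenario (the interface name and link-local addresses are made up, and the system-specific ad-hoc WiFi setup itself is omitted): one laptop listens, the other keeps dialing until the peer comes within radio range:

% nncp-daemon -bind "[fe80::1%wlan0]:5400" &
% while ! nncp-call bob "[fe80::2%wlan0]:5400"; do sleep 5; done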

Another noteworthy utility is nncp-caller: a daemon that calls neighbors over TCP at given times, at given addresses, with given transmit/receive parameters. Its configuration is written per node using cron expressions. For example:

bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  ...
  calls:
    -
      cron: "*/10 9-21 * * MON-FRI"
      nice: 128
      addr: pub
      xx: rx
    -
      cron: "*/1 21-23,0-9 * * MON-FRI"
      onlinedeadline: 3600
      addr: lan
    -
      cron: "*/1 * * * SAT,SUN"
      onlinedeadline: 3600
      addr: lan

This call description says the following: on weekdays during working hours, contact Bob every ten minutes (at his public address, so that attempts to reach the local IPv6 link-local address do not leak it to the network), accepting only high-priority packets (presumably mail) and sending nothing; outside working hours, try his LAN address, accepting packets of any priority and holding the TCP connection through at least an hour of idleness (no point wasting time on handshakes); on weekends, work around the clock with his LAN address only.

Such call descriptions let you flexibly control when and how to talk to whom. If nodes are linked by a telephone line, the call time translates directly into cost. Calls to non-TCP addresses are not implemented yet, but when they are, they too will be written in the addrs section of the configuration file.

Of course, not all of NNCP's functions and usage scenarios have been covered here: the packet formats and the SP protocol were not discussed at all. The guiding principle behind these utilities is KISS and a boycott of complexity. Where this set does something in a non-Unix-way fashion or takes on work beyond the bare minimum needed for store-and-forward networks, it does so purely to save on extra dependencies.
