
Rethinking the Practical Balance Between Decentralized Communication and Central Relays

Messengers that operate on mesh networks using P2P communication already exist. Under the right conditions, they can function independently of existing communication infrastructure and offer strong resistance to censorship and shutdowns. Intuitively, they feel like a glimpse of where communication is headed.

At the same time, this approach has clear limitations. Communication only works reliably if a sufficient number of devices act as relay nodes, so stability is limited to confined areas or brief windows when many people are gathered close together. As everyday, wide-area communication infrastructure, such networks remain fundamentally unstable.
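To make the density problem concrete, here is a toy simulation (not any real mesh messenger's routing logic; the radio range and area are arbitrary numbers I picked): a message can only be delivered if a chain of devices within radio range connects sender to receiver, and the chance of such a chain existing collapses once devices are spread too thin.

```python
import math
import random
from collections import deque

RADIO_RANGE = 100.0   # metres one device can reach directly (assumed)
AREA_SIZE = 1000.0    # side length of the square area the devices occupy (assumed)

def can_deliver(positions, sender, receiver, radio_range=RADIO_RANGE):
    """BFS over the 'within radio range' graph: the message is deliverable
    only if a chain of relaying devices connects sender to receiver."""
    visited = {sender}
    queue = deque([sender])
    while queue:
        node = queue.popleft()
        if node == receiver:
            return True
        for other, pos in enumerate(positions):
            if other not in visited and math.dist(positions[node], pos) <= radio_range:
                visited.add(other)
                queue.append(other)
    return False

def delivery_rate(num_devices, trials=200):
    """Fraction of random sender/receiver pairs that can be connected at all."""
    successes = 0
    for _ in range(trials):
        positions = [(random.uniform(0, AREA_SIZE), random.uniform(0, AREA_SIZE))
                     for _ in range(num_devices)]
        sender, receiver = random.sample(range(num_devices), 2)
        successes += can_deliver(positions, sender, receiver)
    return successes / trials

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(f"{n:4d} devices -> delivery rate ~ {delivery_rate(n):.2f}")
```

The tendency is what you would expect from a random geometric graph: sparse populations rarely form a connected chain of relays, while dense ones almost always do. That is exactly the "densely gathered" condition described above.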

A very different and more practical answer to this constraint emerged in the form of messaging systems that ensure communication continuity while maintaining full end-to-end encryption. Signal is a representative example. Signal did not achieve security by eliminating central servers. Instead, it chose to accept the existence of central servers while removing them from the trust model altogether.

Signal’s servers temporarily relay encrypted messages and store them only while recipient devices are offline. They handle minimal tasks such as distributing public keys and triggering push notifications, but they cannot read message contents or decrypt past communications. Central servers exist, yet they function strictly as relays that cannot see or alter what passes through them.
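As a rough sketch of that division of labour (this is not Signal's actual server code or schema; the class and field names are mine), a blind relay only ever handles opaque envelopes: a little routing metadata plus a ciphertext blob it holds no key for, queued until the recipient reconnects.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class SealedEnvelope:
    """What a blind relay actually stores: routing metadata and an opaque
    ciphertext. There is no plaintext field, and no key that could open
    the blob ever reaches the server."""
    recipient_id: str   # who to deliver to
    ciphertext: bytes   # end-to-end encrypted payload; meaningless to the relay
    timestamp: float    # enough to expire undelivered messages

class RelayQueue:
    """Holds envelopes only until the recipient's device comes online."""
    def __init__(self):
        self._pending = defaultdict(list)

    def enqueue(self, envelope: SealedEnvelope) -> None:
        self._pending[envelope.recipient_id].append(envelope)

    def drain(self, recipient_id: str) -> list[SealedEnvelope]:
        """Recipient reconnects: hand over the ciphertexts and forget them."""
        return self._pending.pop(recipient_id, [])
```

Everything interesting, such as identity verification, decryption, and key management, happens on the devices; the relay's job is reduced to holding and handing over bytes.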

This structure is supported by the Signal Protocol. The initial key exchange produces secrets known only to the devices involved, and encryption keys are updated with every message. Even if a single key were compromised, neither past nor future messages could be decrypted. Even if the servers retained every ciphertext, the data itself would be meaningless.
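The per-message key update is easiest to see in a toy version of the symmetric-key ratchet. This is only one ingredient of the Signal Protocol, and the labels and constants below are made up; the real protocol also layers a Diffie-Hellman ratchet on top so that keys "heal" after a compromise. The idea is simply that each message key is derived from a chain key through a one-way function, after which the chain key is advanced and the old state discarded.

```python
import hmac
import hashlib

def kdf(chain_key: bytes, label: bytes) -> bytes:
    """One-way derivation step: HMAC-SHA256 keyed by the current chain key."""
    return hmac.new(chain_key, label, hashlib.sha256).digest()

class SymmetricRatchet:
    """Toy symmetric-key ratchet: every message gets a fresh key, and the
    chain key advances through a one-way function, so capturing one key
    does not let an attacker reconstruct the keys that came before it."""
    def __init__(self, shared_secret: bytes):
        self._chain_key = shared_secret  # agreed end-to-end; never sent to a server

    def next_message_key(self) -> bytes:
        message_key = kdf(self._chain_key, b"msg")        # protects exactly one message
        self._chain_key = kdf(self._chain_key, b"chain")  # ratchet forward, drop old state
        return message_key

# Both ends, starting from the same shared secret, derive the same key sequence.
alice = SymmetricRatchet(b"\x00" * 32)
bob = SymmetricRatchet(b"\x00" * 32)
assert alice.next_message_key() == bob.next_message_key()
```

Because the derivation step is one-way, obtaining today's chain key tells an attacker nothing about the keys that protected yesterday's messages.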

What matters most is that “trust” is not assumed at any point in this design. Signal does not rely on the goodwill of its operators. Client software is open source, cryptographic specifications are publicly documented, and reproducible builds make tampering verifiable. The principle of “don’t trust, verify” is embedded directly into the system.
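The reproducible-build part of that verification story boils down to a comparison: build the client yourself from the published source, then check that your artifact matches the one being distributed. The sketch below is deliberately simplified; the file names are hypothetical, and real reproducible-build checks typically compare normalized contents rather than raw file hashes, since code signing differs between builds. But the underlying idea is just this:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Digest a file in chunks so large artifacts don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical file names: one artifact built locally from source,
    # one downloaded from the official distribution channel.
    local = sha256_of("app-local-build.apk")
    published = sha256_of("app-published.apk")
    print("match" if local == published else "MISMATCH - investigate")
```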

This design avoids the extremes of both pure P2P and centralized control. It does not accept the instability inherent in full P2P networks, nor does it allow the surveillance and control risks that centralized systems introduce. Central relays are permitted, but they are rendered untrustworthy by design. It is a highly pragmatic compromise achieved through cryptography.

Meanwhile, new approaches are emerging that extend communication infrastructure into space itself. Satellite-based networks like Starlink bypass traditional telephone networks and terrestrial infrastructure altogether. This shift has implications not only for business models, but also for national security, privacy, and sovereignty. When the physical layer of communication changes, the rules that sit above it inevitably change as well.

Since the invention of the telephone, communication has evolved many times. It has repeatedly moved back and forth between centralization and decentralization, searching for workable compromises between technology and society. Neither absolute freedom nor absolute control has ever proven viable in reality.

That is why the question today is not “which model is correct,” but “where should the practical balance be placed.” By embedding trust into cryptography and treating central infrastructure as a necessary but constrained component, it becomes possible to preserve both freedom and stability. Communication continues to evolve, once again searching for its next form somewhere between these two forces.


The End of Skype

It’s worth taking a moment to remember why Skype was once so groundbreaking.

Most obviously, it offered free voice calls.

International calls used to be expensive, but Skype allowed near real-time, high-quality voice calls for free. That might feel ordinary now, but at the time, it was a small revolution.

What made that possible was its use of P2P communication.

That allowed Skype to reduce infrastructure costs, avoid centralized control or censorship, and enable rich, decentralized interaction—voice, chat, file transfers. As a side effect, latency was low and call quality was good.

Not everything was fully P2P, and many parts of the design were still immature. Its authentication and encryption mechanisms had room for improvement, and there were real security concerns.

Still, what mattered was that it worked.

It actually got used. And the way it spread—so quickly and globally—was a perfect example of network effects in action. It also proved just how deeply information and communication technology can reshape human society.

Later, Skype was acquired by Microsoft, and the architecture changed. The uniqueness of its P2P nature gradually faded. It became a cloud-based, business-oriented service.

Given the demands for security, interoperability, and enterprise compatibility, that shift was probably inevitable—but in the process, Skype lost what had made it truly Skype.

As a product grows, and as disruptive technologies get folded into business structures, trade-offs emerge. Skype showed that whole journey in real time.

Whether people use it today, or whether it’s being absorbed into Teams, is not the point.

Skype had already abandoned its P2P model long ago. But the fact that one of the defining examples of peer-to-peer communication is now vanishing from internet history feels unsettling.

And still, I find it curious that so many of the internet’s most disruptive technologies seem to emerge from Europe. Skype was one of them. It’s an interesting pattern.
