Down two Rabbit Holes: Crypto & CO2

In the public perception, Bitcoin is the slowest, most expensive, and most energy-consuming database on the planet. From my perspective this problem is solved by switching from “Proof-of-Work” to “Proof-of-Stake”; although Bitcoin will continue with “Proof-of-Work”, there may even be some interesting Bitcoin climate options.

Crypto technology and organizations promise or already offer: 

  • Transparent public ledgers for trust building
  • International payment and trading systems
  • Institutions independent of national authorities, organized as Decentralized Autonomous Organizations (DAOs)
  • Smart contracts built on open source
  • Highest innovation speed, driven by thousands of talented enthusiasts
  • Evolution of today’s decentralized finance into regenerative finance

and will therefore be relevant for the CO2 removal market. Both crypto technology (see the unofficial state-of-the-art messari report) and the CO2 removal market are in early innovation phases. I dived deeper into both rabbit holes and captured three possible benefits of crypto technology for the CO2 removal market:

  1. Keeping records of CO2 removals in digital ledgers in order to report and audit CO2 removal activities.
  2. Using Non-Fungible Tokens (NFTs) to capture a CO2 removal activity.
  3. Using CO2 crypto tokens to trade CO2 removals independently of national currencies.

The real revolution behind crypto and web 3.0 is a new way for highly motivated people to collaborate fast and intensely: Decentralized Autonomous Organizations (DAOs).

But on the downside, both rabbit holes share a major communication problem: it is impossible to understand what is going on without learning the special tribal language and concepts. My recommendation is to start with the Carbon Removal Primer and Normie’s Guide to Becoming a Crypto Person. Good luck, and don’t get stuck in the rabbit holes too long. I did!

HDD Booting with Raspberry Pi 3 Model B for Home Assistant

…But then my SanDisk 16GB SD card (Type 10 A1) crashed, and I looked into alternatives for running Home Assistant on a Raspberry Pi 3 Model B (RaPi3B).

For me it is clear that the heavy file access of Home Assistant will, in my setup, not work long term with any SD card. Therefore, I was looking for a simple HDD solution. There are many complicated solutions described on the web, but I found a very simple and straightforward approach, which is written in German. Therefore, I will translate the essentials into English for you.

First, you need to take care of the right power supply, as this is the main hurdle for Raspberry Pis working with HDDs. The RaPi3B needs 330 mA for running and offers up to 600 mA for USB and other external peripherals. Peripherals add up quickly: HDMI 50 mA, Ethernet 50 mA, a mouse 50 mA. You can increase the peripheral limit to 1’200 mA, but overall you cannot get more out of the RaPi3B than 2’500 mA at 4.75 – 5.2 V, or 11 – 13 W. And of course your USB power supply must be able to deliver that! You should therefore use the official Raspberry Pi 3 Model B power supply, which delivers 2’500 mA at 5.1 V, and not a USB charger! But even this might not be enough to run an HDD. Therefore, I picked the simple solution: an external HDD with a Y-cable, i.e. two USB plugs, one for DATA and one for POWER.
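The arithmetic above can be sketched as a small budget check. The current figures are the ones quoted above; the HDD spin-up draw is a hypothetical value for illustration, not a measurement:

```java
// Rough power-budget check for a Raspberry Pi 3 Model B, using the
// figures quoted above (approximate values, not datasheet guarantees).
public class PiPowerBudget {
    static final int SUPPLY_MA = 2500;    // official 5.1 V supply
    static final int BOARD_MA = 330;      // bare board while running
    static final int USB_LIMIT_MA = 1200; // raised USB peripheral limit

    public static void main(String[] args) {
        int peripherals = 50 + 50 + 50; // HDMI + Ethernet + mouse
        int hddMa = 900;                // hypothetical 2.5" HDD spin-up draw
        int total = BOARD_MA + peripherals + hddMa;

        System.out.println("Total draw: " + total + " mA");
        System.out.println("Within supply: " + (total <= SUPPLY_MA));
        // The USB limit is the tighter constraint for the HDD itself:
        System.out.println("HDD+USB within limit: "
                + ((peripherals + hddMa) <= USB_LIMIT_MA));
    }
}
```

With these assumed numbers the supply would cope, but the Y-cable sidesteps the whole question by powering the HDD separately.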

Intenso Memory House 320GB external HDD with Y-USB cable and Hitachi HTS545032B9A300 inside

„One to rule them all“

What a great experience with Home Assistant! After just one hour I had a couple of integrations working „out of the box“, just by configuring user names, API keys, or passwords:


Meteoplug = Meteohub + DreamPlug Online

It took me more than two years to find a better solution for running a local data-collection server in my home. Until now I used an old laptop with all available energy-saving options. But this turned out to be noisy and not really reliable. Moreover, the software that comes with my Weather Station and Energy Control on Windows is buggy like [….]

My newest gadget is a DreamPlug with preinstalled Meteohub software. It now collects data from my Weather Station and Energy Monitor. Believe it or not: it is a Debian Linux system and was plug-and-play! Meteohub pushes a gnuplot-generated .png file via FTP every 5 minutes. Here is the latest version:

Latest Temperature Measurement from my Weather Station

Next step is to make a nice dashboard…

Dualism of virtual and physical existence

Here comes another facet of the dualism of information about someone or something versus their physical existence. I strongly believe that in web 3.0 we will confuse the virtual existence with the physical one, or favor it. A few days ago I visited mobile metrix and read: „More than 1 billion people in 120 developing countries have no official record of their existence“. This project has the vision to improve poor people’s lives with an official record, or what I would call a virtual existence. The news page even says: „How do you serve people if you don’t know they exist?“ I am impressed.

Additional temperature sensor and new pachube feed

As I’d like to make use of the sensors already available in the house, I connected a „Pt1000“ test sensor to my C-Control. Unfortunately, the C-Control features only an 8-bit A/D conversion, giving poor accuracy at normal temperatures. Anyway, I added a new pachube feed because I couldn’t extend my old one. Please watch it here. I will leave the old feed as it is for documentation purposes. The software I posted here and all the example pictures use the old feed.
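To see why 8 bits are the problem: 256 converter steps spread over the sensor’s whole range leave a coarse temperature resolution. A minimal sketch, assuming a hypothetical -50 to +150 °C measurable range (not my actual wiring):

```java
import java.util.Locale;

// Why an 8-bit A/D conversion gives poor accuracy: only 256 steps
// are available for the sensor's full range. The temperature range
// below is an assumed example, not the real Pt1000 circuit.
public class AdcResolution {
    public static void main(String[] args) {
        double tMin = -50.0, tMax = 150.0; // assumed measurable range in °C
        int steps = 1 << 8;                // 8-bit converter: 256 steps
        double degPerStep = (tMax - tMin) / steps;
        System.out.printf(Locale.ROOT, "Resolution: %.2f °C per step%n", degPerStep);
    }
}
```

Almost 0.8 °C per step is far too coarse around room temperature; a 10- or 12-bit converter would shrink the step by a factor of 4 or 16.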

An Unheroic Story: Connecting Java to C-Control

After the end-of-year parties I had some time to continue my web 3.0 project. In a very incremental way, I want to connect my Conrad C-Control V1.1, which features temperature, humidity, and air pressure sensors, to the pachube service.

After looking at several alternatives for implementing the connection from the C-Control to the Internet, I decided to use the serial port of the C-Control, connect it to my PC, and run a Java application to send the XML data to pachube.

Of course there were some technical alternatives (Perl, C++), but I decided to build on Java, as I would learn the most about how easily I could integrate the old embedded technology with current Internet standards.

First I installed the newest Eclipse IDE (Version 3.4.1, Build id M20080911-1700). This turned out to be easy even though I already had the Eclipse CDT for C/C++ development. I simply have two instances of Eclipse in separate sub-folders: one for Java and one for C.

Then I started browsing for data streams in Java to read input from the PC serial port. Boy, I was shocked. There is NO serial or parallel interface support in standard Java. The only Sun support is an outdated and officially unsupported Sun Java library called the Java Communication 3.0 API. For Windows one has to rely on open source development. So first I registered as a Sun developer to acquire the Sun Comm API 2.0.3, which required installing the Sun download tool on my PC. After that I struggled quite a while to find the place in Eclipse to add the library to my project. It took me even longer to find the promised example for using the serial port interface…

Then I googled for the rxtx Windows support and looked for Eclipse plug-ins. Now the real unheroic story began: I was not able to make the rxtx Windows DLLs work with the Sun Comm API. After reading dozens of bug reports, howtos, and news postings I went to bed ;-(

The next day I found a pretty simple solution: do not use the Sun Comm API at all; instead use the .jar file from rxtx. After importing the sample Java file, Eclipse was happy to compile the executable. But this didn’t mean that things worked out already. The input stream from the serial port captured by Java contained very strange characters (NULL and others), although the Windows terminal software did not have these problems.

After another couple of hours in bug reports and news postings, and some testing here and there, I found a bug in the „official sample“ file: the readBuffer size did not match the number of bytes actually read from the port. Quite annoying…
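The fix, in essence, is to use only the number of bytes that read() actually returns instead of the whole buffer, which otherwise contributes stale NULL bytes. A minimal, hardware-free sketch of that pattern, with a ByteArrayInputStream standing in for the rxtx serial stream:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// The buggy sample appended the whole readBuffer to the output, so the
// unwritten slots showed up as NULL characters. The fix: append only
// the first numBytes that read() reports.
public class SerialReadFix {
    public static String readAll(InputStream in) throws IOException {
        byte[] readBuffer = new byte[20];
        StringBuilder sb = new StringBuilder();
        int numBytes;
        while ((numBytes = in.read(readBuffer)) > 0) {
            // Use only what was actually read, not the full buffer.
            sb.append(new String(readBuffer, 0, numBytes, "US-ASCII"));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the serial port input stream:
        InputStream fake =
                new ByteArrayInputStream("T=21.5;H=47".getBytes("US-ASCII"));
        System.out.println(readAll(fake));
    }
}
```

The same loop works unchanged on the InputStream that rxtx hands out for a real serial port.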

Overall it took me about ten hours to display the serial port input stream on the PC screen. The code to do this has 195 lines. In Perl this would be a small exercise of 10 minutes at most, and streams in C++ would have been much easier as well. My first conclusion: Java is not at all ready for the embedded world yet! The code is here:

The next story will be the XML-Output in eeml-format for the pachube web service.

Definition of web 3.0

There are many discussions and definitions out there for the term „web 3.0“. I think it is much too early to judge, as I can recognize only a convergence in technology, and who knows what that will finally mean…

Here are some theses I’d like to share.

First, I believe that today’s millions of embedded software systems and the Internet will get connected. Many people think that the Internet will invade embedded systems. I see it differently: as embedded systems are much more natural to work with, it is the embedded systems that will replace PCs and laptops, while using the same network infrastructure created for the PCs.

Often people say web 3.0 is the Internet of Things, e.g. this quote from timbl’s blog: „It’s not the documents, it is the things they are about which are important“. Even stronger is the concept of RFID chips in every object in the world. Of course this gives a unique ID to every piece on the planet, but how do we make use of it?

Other people argue that web 3.0 is all about the semantic web.

I am thinking more of a dualism in web 3.0 of 1. physical objects being connected and 2. information about the objects and their connections. The story will only work if objects are somehow accessible on the Internet and semantic meta-data about them is on the Internet.

This raises some heavy philosophical questions. There has been a centuries-long discussion about existence and essence, which is strongly linked to what humans can observe to be existent. If you take the above „and“ literally, this would mean an object only exists if a human can physically recognize it and its semantic meta-information is on the Internet. The latter could even be the essence of the object. If you think this is strange, I have an interesting question: how much of you is physically visible, touchable, or in general sense-able, and how much information about you is on the Internet? If you are a very famous person, there is a lot of information about you on the Internet. This information might even be wrong, but it has a strong impact on you „physically“. Dualism looks like the right concept for this phenomenon.

Finally, the development of web 3.0 depends heavily on society’s trust, security, and stay-in-charge needs. It looks like young people are very open to sharing very private information on the web, while older people are much more reluctant. Data security flaws are not new, but the impact and the amount of data are much higher today, putting much more at stake. I see this as the wild card in the game.