Upgrading to Fedora 39

Lessons from the upgrade issues I discovered

Why nVidia is not a viable option on Linux

Let’s begin by saying that this is not how to upgrade. Rather, this is a discussion of issues a long-time Linux fan can discover in this complex ecosystem.

The trap of nVidia video cards

One of these issues, the root cause, cannot be easily addressed by the Linux community. Many of us bought nVidia GPUs and filled our machines with them. The hardware is very high quality. So high quality, in fact, that nearly all my nVidia GPUs work just as well today as the day I bought them at producing good video on 1080p displays. Notably, I purchased three GeForce GTX 550 Ti cards, originally intending to use SLI, but ultimately spread them among three different machines. Today, they continue to produce video and meet my needs; I'm typing this blog post on my primary workstation, which uses one of them. So, what's wrong with nVidia?

The issue arose when I went to upgrade to Fedora 28, released in May 2018. After many rounds of reinstalling the native drivers whenever I upgraded Linux, I discovered they were no longer an option for my cards: nVidia decided to stop supporting native drivers for them beginning with Linux kernel 5.0, which Fedora 28 used. This left me stuck all these years on Linux 4.9, which, again, I am typing to you on today. Yes, I've been stuck for nearly 6 years on Fedora 28, but with the Linux 4.9 kernel, unable to upgrade Linux since.

I've worked around this in part by using Flatpak, Snap and brew, which kept me productive as the primary Fedora 28 DNF repos quickly became useless.

The Plan to Upgrade

Enter the plan to upgrade. Hardware-wise, I needed another SSD; I always install a new drive for a major Linux upgrade, which lets me easily multi-boot between installs (thus, here I am back in Fedora 28 writing this after installing Fedora 39) while also sharing a large part of my file system via mounts and symlinks to ease the migration. I develop software and run a lot of complex things that can't just be reconfigured in a day.

But the key to making this plan succeed is to never again buy an nVidia card, and instead to go all in on AMD cards, which I've been told work flawlessly with Linux without the need for any native drivers, thanks to AMD's commitment to the open source community. While there were concerns about losing CUDA, the reality is I haven't used it much. And today you can get AMD cards with dedicated AI accelerators, ray tracing, and a lot of other new bells and whistles.

Nonetheless, because I don't have time for gaming, and I'm not doing much that depends on AI inside the PC today, I chose a simple, cheap upgrade for now so I can save money for a future gaming/AI beast with a much better card later. Thus, I chose a no-frills, on-sale Radeon 6500 XT for $135.

Trying Fedora 39 Live

After purchasing the AMD GPU and putting it in another slot, with the nVidia card still installed, I created a bootable flash drive with Fedora 39 on it and tested the waters. Upon boot, I could immediately see the 6500 XT via lspci.

However, there were issues with Fedora 39 Live. When I plugged a third monitor into the AMD card, Fedora saw it; you could see the Fedora logo on all 3 monitors as it booted. But when it reached the login screen, only the 2 screens on the nVidia card rendered completely. The screen on the AMD card was black, except that I could clearly see the mouse cursor when I moved it there; the cursor was the only thing it could render. Unfortunately, for whatever reason, it kept picking that screen as the only monitor to put the "invisible" login screen on. Still, I could guess what was there, hit ENTER, type my password, hit ENTER, and get in.

Again, I could see the Fedora desktop clearly on the 2 nVidia screens but not the AMD screen, though I could still see my mouse when I moved over it. Of course, it again chose the AMD screen as the primary screen. Fortunately, you can point your mouse at the background of any screen, right click, and get into Display Settings, where you can change your primary monitor to get past this.

I had hoped this was a temporary problem with the Live edition that would clear up once I installed and updated it. Thus, I ran out and bought a new M.2 SSD, put it in the motherboard, and installed Fedora 39 on it. The rest of this story uses a completely updated Fedora 39 install. Long story short, the complete install and update did not by itself clear up the video issues. Yet, progress continued.

Wayland vs X11

I searched a lot online. But most people having "black screen" or related issues were nVidia users, and they were all told to install the native drivers; many of them, I suspect, were on the exciting journey of discovering that their latest upgrade took them over the edge of nVidia's support for their video card.

So, I decided to drop back to X11. I could get the login screen to show on an nVidia screen if I unplugged the AMD screen, so I was able to tell it to use X11 and then log in. Then, after rebooting with the AMD monitor plugged in and typing my password into the invisible screen, BAM, I could now see my AMD monitor in full color and glory! Yes, AMD does indeed work with Fedora 39, but only with X11. Why? I do not know. This problem is currently unresolved.

But this isn't a show stopper. I learned how to configure the login screen to also use X11, and thus I'm able to fully enjoy my AMD card with Fedora 39. Obviously, this is not a long-term solution with IBM's Red Hat clearly backing Wayland, but it is certainly a better place to be than on Fedora 28, constantly trying to get newer things to work.

Computer Freezing

Unfortunately, this dream didn't last long, as the computer kept freezing after 10 minutes or so. All the evidence, however, suggests the issue is video related, as people with this problem on nVidia were told to install the native drivers, which obviously wasn't an option for me on Fedora 39 and kernel 6.8.

Linux defaults to booting with the nouveau drivers. These are open source drivers for nVidia cards that attempt to provide core capabilities, but they cannot provide all the bells and whistles because the native drivers are not open source. nVidia announced a few years ago that they would finally open source their video drivers. What people say isn't news; it is what people do that can change the world. Today, it does not look like there has been a lot of progress on the open source nVidia drivers.

The bottom line, however, is that Fedora 28, while very outdated, is completely stable. I can easily run 24/7 without a reboot for a month, using the computer from morning to night. But I cannot run Fedora 39 for 10 minutes without a freeze when running the default nouveau and AMDGPU drivers. Process of elimination…

So, I decided to unplug the monitors from the nVidia card and just boot up with one monitor on the AMD card. Now I had unprecedented stability. I installed codecs and such like I normally do on a new install, and watched videos and fell asleep to Tubi TV, which continued to play in the morning when I woke up. I was pretty happy. Problem solved! Or, so I thought.

Later this morning it locked up again. This time, notably, I was interacting with it with the mouse, viewing videos on Odysee. Clearly, it is more stable without nVidia in the picture, but not completely stable.

I rebooted. Tried again. It went awhile, then froze again. One thing I noticed is that when it freezes, it is initially just the video: the screen quits updating. You can't move your mouse and see no video updates, even if a video is currently playing, yet the audio of a video can keep playing. In fact, in one video, I heard the audio continue for 5-10 minutes. Eventually, though, given enough time, the audio dies as well. Also, right before it freezes, you sometimes see visual artifacts, like trails on a window you are dragging.

Upgraded, yet on Fedora 28

After plowing $300 into a new GPU and M.2 SSD, I'm still on Fedora 28, because it is rock-solid stable. Granted, this is only day 2. I'm very hopeful for a miracle and will reach out to the Fedora community to hopefully discover the cure. I'll update this blog.


Principles of Communications when Developing Messaging Applications

This is an intro to the challenges and options you face when developing applications involving digital communications, particularly real-time communications.

There are many streaming communication protocol options to choose from, including regular socket communications, WebSockets, MQTT and a Vert.x event bus. I’ll focus on dedicated connections and leave request/response paradigms such as REST and gRPC out of this discussion. Note that RESTful APIs have been evolving to support streaming capabilities that in some ways overlap the use cases, such as StreamingResponseBody, WebFlux or using Mutiny’s Multi with Resteasy Reactive.

Recovering from a Disconnect

Because we’re talking about dedicated connections such as WebSockets, you’ll likely want some way for your clients to retry connecting in the case of loss of network continuity, server downtime or anything else temporarily impacting reachability.

In the early days of WebSockets we had to handle this ourselves: detect the premature disconnect and kick off a retry process in your connectivity layer. Today, however, there are libraries that help with this, such as socket.io.
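If you are rolling this yourself rather than using a library, a minimal sketch of such a retry loop might look like the following. The URL, the backoff limits and the callback are placeholders for illustration, not values from any specific product.

function connectWithRetry(url: string, onMessage: (data: string) => void): void {
  let attempt = 0;

  const open = (): void => {
    const ws = new WebSocket(url);

    ws.onopen = () => {
      attempt = 0;                          // reset backoff after a successful connect
      // notify the UI here that the connection is live (e.g., show the connected icon)
    };

    ws.onmessage = (event) => onMessage(String(event.data));

    ws.onclose = () => {
      const delayMs = Math.min(30_000, 1_000 * 2 ** attempt);  // exponential backoff, capped at 30s
      attempt += 1;
      setTimeout(open, delayMs);            // schedule the reconnect attempt
    };

    ws.onerror = () => ws.close();          // fold errors into the close/retry path
  };

  open();
}

// Usage: connectWithRetry("wss://example.com/feed", (msg) => console.log(msg));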

You'll want to be sure you can broadcast these events to the UI. In a trading system I created, I show a connected icon when connected so you always know you have a valid connection. I also like to show the timestamp of the last received message.

How you handle recovery once you have a valid connection re-established can vary by use case. If you only care about the latest state, you'll simply want to be sure you receive the latest state upon re-connect. For this reason, if you re-subscribe to a feed such as live stock quotes, you'll want to always get the current stock quote once the subscription begins, even if the price hasn't changed.

While much of this is handled at the application level and not the third-party protocol level, libraries such as socket.io can assist with conventions such as a mechanism for letting the client request all data since the last message ID it received or handling some server-side message backlog (buffering) for you.

If your data is being persisted, you can let the client query any data it missed upon reconnection to fill the gaps, so long as it can provide parameters describing the point where it last received a message up to the first message after reconnect.
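As a rough illustration of that idea, the client can remember the ID of the last message it processed and ask for everything after it once it reconnects. The /messages endpoint and the after parameter below are hypothetical names for the example, not a specific API.

interface FeedMessage {
  id: number;        // monotonically increasing server-side message ID (an assumption)
  payload: unknown;
}

let lastReceivedId = 0;

function onFeedMessage(msg: FeedMessage): void {
  lastReceivedId = Math.max(lastReceivedId, msg.id);
  // ...hand the message to the rest of the application...
}

async function fillGapAfterReconnect(baseUrl: string): Promise<void> {
  // Ask the server for everything after the last message seen before the disconnect.
  const res = await fetch(`${baseUrl}/messages?after=${lastReceivedId}`);
  const missed: FeedMessage[] = await res.json();
  missed.forEach(onFeedMessage);
}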

Handling Backlog

If your server side builds up a backlog of messages because the client is consuming them more slowly than your server is producing them, you'll need a way to tackle it.

Note that all the libraries vary on how they support this, so you’ll want to understand your backlog requirements ideally before picking one. In many cases, you’ll depend on your own server-side logic for handling it.

For instance, in a video pipeline for one application, I implemented an H264 frame dropper that detects lag and adjusts the rate of dropping, as well as the priority of which types of video frames to drop, based on how far behind it is. Our goal here was live video, and if we didn't drop frames once we were 5 minutes behind, we'd always be 5 minutes behind. Dropping in a way that provides the best user experience can be complicated, and no third-party library could do this for us. Dropping on the server side helps us better mitigate lag due to network contention, such as when the user is on a poor wi-fi connection.
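As a rough sketch of the general idea only: the thresholds and the "drop delta frames first, resume on a keyframe" policy below are illustrative assumptions, not the actual production logic described above.

interface VideoFrame {
  isKeyFrame: boolean;     // IDR/keyframe vs. delta (P/B) frame
  captureTimeMs: number;   // when the frame was produced
}

let waitingForKeyFrame = false;

function shouldDrop(frame: VideoFrame, nowMs: number): boolean {
  const lagMs = nowMs - frame.captureTimeMs;

  if (lagMs > 3000) waitingForKeyFrame = true;    // far behind: flush everything up to the next keyframe

  if (waitingForKeyFrame) {
    if (!frame.isKeyFrame) return true;           // keep dropping until a keyframe arrives
    waitingForKeyFrame = false;                   // a keyframe is a safe point to resume sending
    return false;
  }

  return lagMs > 500 && !frame.isKeyFrame;        // modest lag: drop delta frames only
}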

Unlimited backlog can also be a problem as you’ll run out of memory on the server, especially with something like a video pipeline for high FPS high resolution cameras, or if you have thousands of clients subscribing to a feed, and have to handle their backlog individually.

Some libraries, such as RSocket, offer flow control which can help you make better decisions on the server end as well as help support topologies such as load balancing.

To ACK or not to ACK

Acknowledgements (ACKs) can help provide message delivery guarantees. However, they come at a cost. If your messages are ordered (a pipeline), ACKs can introduce latency that becomes significant if, for instance, you are catching up on backlog.

There are also cases where you need it, just not at the communications level. For instance, in trading, the client needs acknowledgement when it places an order. What it really needs to know, though, isn't that its order was received by the server endpoint, but that it was received by the order fulfillment system. You need to know that the order is in process and has certain transactional guarantees. For this reason, you'll prefer an order confirmation message over a protocol ACK. Until then, your client will consider it "pending receipt".

Socket.io provides ACKs via a callback option and a timeout. Vert.x also offers a reply option. In practice, though, I don't use it for pipelines. I do use it for request/response semantics, often to report success or failure of an operation driven by the message, and sometimes to return a rich response as the result of a query.
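For reference, an acknowledged emit with the Socket.IO v4 client looks roughly like this; the event name and payload are made up for the example.

import { io } from "socket.io-client";

const socket = io("https://example.com");   // placeholder URL

// Emit with an acknowledgement callback and a timeout. With .timeout(), the callback
// receives a timeout error as its first argument if no ACK arrives in time.
socket.timeout(5000).emit("place-order", { symbol: "IBM", qty: 100 }, (err: Error | null, response: unknown) => {
  if (err) {
    // no ACK within 5 seconds; treat the order as "pending receipt" and retry or alert
    return;
  }
  console.log("order acknowledged:", response);
});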

Text or binary

There are use cases where binary can increase performance or throughput of network connections. It can also help scale where network bandwidth is a primary constraint. It can decrease egress costs in the cloud.

However, there are many cases where text does the job just fine. We were streaming video as binary but then had to switch very quickly to a different connector due to limitations on certain network topologies and limited availability of fast time-to-market options. This meant using Base64 in JSON, increasing our bandwidth consumption by about 30%. To our surprise, it handled our load well, even with many clients streaming 4K. Of course, we had an H264 dropper for cases where clients had really constrained bandwidth. Yet, this worked for our clients. In testing, I had no problem streaming real-time high resolution video over VPN and wi-fi.

The lesson here is that binary only provides benefits if you actually need it. If you don't, text can be simpler to develop and does the job just fine.

The Future of Communications

We've come very far in capabilities in the past 10 years. For an individual client, it's hard to imagine a use case where the benefits of low-latency, high-bandwidth real-time communication are more apparent than a user viewing real-time video and then using PTZ to move a camera in another part of the world, watching the camera respond to their commands; and today's technology can do just that.

While the Internet is not on par with a 100 Gigabit LAN connection to a commodities exchange, it is good enough and fast enough for retail traders to profit, for security personnel to protect high value assets, and for people to navigate their cars around traffic congestion.

Thus, I expect continuous improvement in lowering network costs in cloud environments, improved scalability to handle thousands if not millions of clients, and an increase in IoT data collection.

For this reason, I’ll end by introducing the latest kid on the block, MQTT. This simplified technology has become the de facto way to collect IoT data from remote devices. Like WebSockets, the clients begin the conversation. It can provide an ACK if needed, can support authorization, and it can broadcast when a client connects or disconnects. Likewise, MQTT clients are easy to add to IoT devices, such as Arduino. It is also very easy to scale on the server side with Kubernetes. Its pub/sub model makes it easy to route messages and it integrates well with other communication technologies such as Vert.x.

In a real-time security monitoring application I added MQTT to our solution so that cameras could transmit JSON analytics (ONVIF Profile-M) when they recognize a person or vehicle, and so IoT devices could transmit their signals.
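As a sketch of what the consuming side of that can look like with the MQTT.js client: the broker URL and the cameras/+/analytics topic layout are assumptions for the example, not the actual deployment.

import mqtt from "mqtt";

// Broker URL and topic layout are placeholder assumptions.
const client = mqtt.connect("mqtt://broker.example.com:1883");

client.on("connect", () => {
  // One topic per camera, e.g. cameras/<id>/analytics, carrying JSON analytics events
  client.subscribe("cameras/+/analytics");
});

client.on("message", (topic, payload) => {
  const event = JSON.parse(payload.toString());   // e.g., a person or vehicle detection
  console.log(`analytics from ${topic}:`, event);
});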


Owning your contacts, calendar and tasks

Tech Freedom Series

Owning your data is not just having control over it. It means you don’t have to share it with anyone you don’t want to, including large corporations. Yet, you should be able to share it with all your devices. If you add a contact on one device, it should appear on all your devices without depending on the cloud or a login to Big Tech that can be revoked without prior notice.

You don't even need the Internet to share your data among your devices. As long as you have wi-fi, your devices can talk to each other, passing your new calendar event to them all.

All the mobile apps you need to install for this are available on F-Droid. Here is the overview to help you get started with F-Droid if you have not installed it yet. Because of this, the setup works on any Android phone, including phones running LineageOS.

Syncthing

The first app you want to install is syncthing. Install this on all your devices, including your desktop and laptop computers. This is the backbone that can do many things to restore control over your private life.

The way it works is that it synchronizes a folder and its files among all your devices over your wi-fi. When a file changes on one device, the update is propagated to every device that folder is shared with. You decide which device has access to which folder, and whether each device can write or just read. One common use is to send all the photos and videos you take on your mobile devices to your regular computer.

In this guide, it will be used to propagate changes to your contacts, calendar and tasks.

Material Files (optional)

While technically optional, this app, available on F-Droid, is very handy for custom configuring Syncthing. If you are OK with storing the data on your internal storage and aren't concerned with neatly organizing all your Syncthing folders, then you don't need this app; Syncthing will recommend and create a folder for you. Fortunately, this type of data takes up very little space, so it won't dent your internal storage, and it only requires one Syncthing folder shared by contacts, tasks and calendar.

However, if you want it on your external SD card or want to organize your Syncthing folders, Material Files has a Copy Path option you can use to copy the path of the folder you create so you can paste it into Syncthing. Syncthing cannot navigate your file system to find the folder, so you have to paste the entire path if you want to override the default. This path can be very ugly if it is on an SD card and virtually impossible to guess from the usual Files apps.

Contacts

You'll continue to use the Contacts app that came with your Android phone. Because contacts are a core part of Android, it integrates well with other apps that need it, such as your email apps, helping fill in the email addresses of people you are composing email to or the phone numbers of people you call.

On the Linux desktop, I recommend Evolution. If you're on a non-Linux desktop or laptop, you'll have to find something similar that can do the same things; I can only vouch for the Evolution Mail and Calendar app. I'd consider trying Thunderbird next if not on Linux, but I can't vouch that it has the options needed to integrate.

Because Evolution handles contacts, calendar and tasks in addition to its core feature, e-mail, I'll focus on Android in the next parts and then come back at the end to how to tie the data to Evolution.

Calendar

This is likely to work with the Calendar app already on your Android device. Go into your Calendar app and open its About screen. If it says "The Etar Project", you are golden. If not, it may or may not integrate. Regardless, you can always install the Calendar app from The Etar Project via F-Droid if you don't have a Calendar app or the one you have doesn't integrate correctly.

Tasks

There are two apps on F-Droid that work well, OpenTasks and Tasks.org.

DecSync CC

This app, also on F-Droid, works with Syncthing to replicate your contacts, tasks and calendar events to all your devices. You'll also need to run it on your PCs. Its GitHub page has links to the various options. It does support Thunderbird, which you'll want to try if you are not on Linux; Thunderbird works on all major operating systems.

Note that on Linux, neither Syncthing nor DecSync CC starts automatically. You either have to configure them to start on boot or when you log in, or launch them each time you boot your PC. If they are not running, your data will not update between Evolution/Thunderbird and your Android devices, though your Android devices can still update each other, since they can start automatically on Android. Both Syncthing and DecSync CC leave it up to you to find a way to start them automatically, or to remember to start them when you boot up and log in. I'll try to update this post one day with the best way to do this on Linux.

Configuring Syncthing

Now that you have installed all the necessary components, begin by learning how to get Syncthing to find all your devices. Each device has an ID that it can show as a 2D bar code. If your other devices can read bar codes, you can add them via the Devices tab using this. Ultimately, this is just a bar code encoding a long string that uniquely identifies the device. If you cannot add devices by scanning bar codes, just copy this ID, use whatever means you have to get it to your other device, and paste it in to add the device. Do this until all your devices are added to all your devices. After that they'll have no problem finding each other on wi-fi.

You'll see an option to make a device an Introducer. Pick only one device to be your introducer, and turn that option on for that device on all your other devices. Ideally, this might be a PC you have on all the time. This isn't necessary, but it can make configuration easier. I definitely don't recommend more than one introducer in your cluster of devices.

Next, add a folder on your main device. You can call it "dec" for purposes of propagating your contacts, tasks and calendar. This is where DecSync CC will put all the files necessary for your devices to share this data. For that folder, in the Share tab, share it with all your devices. Do not enter a "trusted" PIN; you don't need that for devices you trust, and it will only complicate things. That option can be useful for sharing folders with friends online as an extra security measure, but it is not needed when you are able to verify these are your devices.

Configuring DecSync CC

On each device, use the menu to add your "dec" folder to DecSync CC. It will now put your data in that folder, and Syncthing will replicate it to all your devices.

Under Settings, set the Task app to point to the app you chose to use for Tasks.

I can't remember exactly how I set it up, and will update this post the next time I add a device, but you want one collection under Address books, one under Calendars, and one under Task lists, and they all need to be checked. You can give them any name, such as Business or Personal, as you might have collections you keep separate for various reasons.

I believe that after you add them on one device, they will show up on the other devices once Syncthing replicates. But you'll need to check the checkbox under each category for it to integrate with that device.

That’s it for the Android part. If you add a contact, task or calendar event, they should show up, typically within a few minutes, on the other devices.

Configuring Evolution

Despite the Evolution plug-in, I ended up using Radicale DecSync, the option recommended for Thunderbird. If you don’t have Python 3 installed, you’ll need to install it in order to install Radicale. Follow their instructions on their GitHub page.

Per their instructions, you'll create a file ~/.config/radicale/config and copy the contents from that page into the file. Change the decsync_dir setting to point to the folder you added to Syncthing on this PC.

I recommend creating an executable script in a folder on your PATH that runs the command, so it is easier to launch next time. You can put this in a text file called radicale:

python3 -m radicale --config ~/.config/radicale/config

Give it execute permissions and put the file in a folder on your PATH. Now you can just run radicale to launch it after you log in. To run it in the background, use radicale & from your terminal and leave that terminal tab open.

You can verify it is running by opening this URL in your browser

http://localhost:5232/

Once you log in (with any username and password), you'll see a URL for each of your collections. To add the tasks URL to Evolution, copy it. In Evolution, do New Task List. Select CalDAV as the type. Give it a name; I recommend appending something like " (dec)" to the name so it is clear that this one comes from DecSync. Paste the URL in the URL field. Enter anything in the user field. Hit OK. Your tasks should now show up if you have any, and you can add tasks from Evolution.

Do the same with New Calendar, only now copy the Calendar URL from the Radicale web page and paste that in the Evolution dialog.

Do the same with New Address Book. Select CardDAV as the type. Give it a name. Copy the address book URL from Radicale and paste it into the Evolution dialog. Put in any user name.

You should now be able to view these items in the Contacts, Calendar and Tasks views of Evolution. Be sure they are all checked to enable them.

When you write an email, you can now use your Dec contacts. When you create a calendar event, you can select your Dec calendar to create it in. Ditto for tasks. You should see items created on mobile devices, they should see items you create, and any device should be able to edit or delete the items.


Being free of Big Tech

Tech Freedom Series

There are many things you can do to be free of big tech while having very capable mobile devices and home computers. You should own your data and you should not have to share it with or trust large corporations to communicate with others and organize your life.

The guides I'll be creating are for Android, because Apple does not provide the same options to customize your devices and remove Apple or other companies from your personal data.

The first thing every Android user can do is install F-Droid. This provides an app repository for installing apps that replaces Google Play. Unlike Play, all the apps on it are free and open source. This is effectively enforced because an app can only be listed on F-Droid if it can be built on F-Droid's servers, which means F-Droid has to be able to download and compile the source code.

F-Droid also provides other protections. It will examine the source code and flag things you may not like, such as code that can track you.

You may see scary warnings from Android as you install F-Droid and then install your first app from it. Contrary to the warnings, F-Droid apps are much safer because they are open source. Play apps can be closed source, not allowing you to know what they are doing on your phone. If you want safety, security and privacy, F-Droid is the way to go!

You may need to allow your browser or file manager to install unknown apps, and possibly enable developer mode on your device. The latter is easy: you basically find the Android build number on your phone and tap it repeatedly until it tells you that developer mode is enabled.

If you install F-Droid on a device that has Play, you’ll still be able to use Play. It does not disable it. You’ll just have two ways to install apps.

I highly recommend, though, that you try to only install apps from F-Droid going forward. Part of this process is changing habits. Not all apps are available on F-Droid that are available on Play. However, before you head over to Play to install it, ask yourself, do you really need it? Is it worth the loss of privacy and other risks that come with it?

The coming guides will show you how to do common things using apps on F-Droid, so you can learn to be free.

Other Posts in the Tech Freedom Series

Owning your contacts, calendar and tasks


The New Stacks Open Development Community

participants continue to share the belief that they ‘can do more in concert than individually’

OpenSource.com

I propose developers in the Stacks blockchain ecosystem join together in one identity — The Open Development Community.

Our primary objective is to support each other and help us all succeed. Together, our objectives can include creating the best free open source software (FOSS) that the core developers can build on.

To unite us, I propose these values:

  • Community first. People first. Then code. Our culture can support anyone looking to contribute.
  • Transparent. Our discussions of architecture, direction, plans and priorities should be visible with our code.
  • Collaborate across boundaries. All developers are invited to participate in our collaborations, including those who document and test, and those who work for or on other interests. Linux code today is often contributed by employees of companies. Yet, individuals without outside backing are 100% included at all times.
  • A bias for asynchronous. To encourage people from all over the planet to join us, our communications can favor asynchronous channels, such as email lists, so everyone has a chance to participate regardless of time zone.

In the beginning, there is no official join process. If you'd like to contribute to software and believe in our values, you are a part of our community. While our end goal is to support decentralized app (dApp) development, all development that supports the Stacks ecosystem is included. This can include the Stacks node, APIs, libraries, frameworks, tools, demos, middleware, configurations, templates and virtually anything else imaginable that can be built to help us create a better platform for a user-owned Internet.

Dear Stacks Foundation,

Our community is independent of the Stacks Foundation. However, we invite people from the foundation to join us, to become part of us, as individual contributors and participants.

While our goal is to be self-sufficient in the long run, we do welcome contributions the foundation can offer our community to help us have the best possible beginning, particularly for open source components that have no direct profitable path for the participants involved, yet produce value for the community and the ecosystem and help the overall success of Stacks.

Dear Hiro Systems,

Our community is independent of Hiro and any other organization or company. Yet, we value your contributions and hope you can both join us as individuals and partner with us as an organization to help further our common vision.

Please involve the community in the architecture, development, planning and prioritization of the reference implementation of the Stacks Node and related parts such as the API and libraries. There is a way to achieve a synergistic balance where you can continue to develop the reference implementation while empowering the community to achieve faster time-to-market in improving components and innovating.

Our Impact

By collaborating across boundaries, the Open Development Community, Hiro and the Foundation can improve time-to-market, quality, and opportunity for all participants in the ecosystem. Together, we can dramatically improve the chance of success for Stacks, while creating downstream opportunities for profitability for all participants.

With the community’s participation, things like app chains and decentralized finance (DeFi) can quickly come to pass. Options can abound and best of breed can rise to the surface in addition to open standards driven by an open community that together can quickly rise to the challenge. This is a win for all of us as our open community lowers our risk while increasing our potential returns.

We can have a user owned Internet created by a family who builds together.

Where we go one, we go all (WWG1WGA)!

Also posted on the Stacks Forum.

Posted in Crypto, dApps, Development, Technology, Web | Tagged , , | Leave a comment

Stacks Blockchain API Architecture – WebSocket message flow

This post discusses the architecture behind a critical piece for Stacks decentralized app (dApp) developers: the flow of data from the Stacks Node, where the blockchain is constructed, to the WebSockets service provided in the Stacks Blockchain API. dApps can subscribe to data streams via the WebSockets service in the API, such as updates to transactions, whose status can change over a period of time.

The WebSockets data flow

The process begins with the Stacks Node. This server, written primarily in Rust, can reside anywhere on the network. The details of the connectivity between the Stacks Node and the Stacks Blockchain API server are for a different discussion. The important thing here is that it sends data to the API server as important events occur, such as when a new block is created. Today it does this via REST HTTP calls.

On the other end you have dApps subscribing to data streams via WebSockets, such as events involving a transaction ID. One use case is a dApp that needs to confirm a transaction has made it into an anchor block successfully before taking other actions, such as completing the other side of a cross-chain transaction.

Inside the Stacks Blockchain API

This is a Node.js server built in Typescript using the express library for its HTTP interface. Various components are bolted onto this server, including the RESTful events server that the Stacks Node currently sends its messages to, as well as the WebSockets server, which is bolted onto the URL path "/extended/v1/ws".

Incoming messages from the Stacks Node are queued via p-queue (first in, first out) to force them to be handled serially, in the order they arrive. Each type of message has a message handler that can process the data before sending it to the data store. For instance, a block can have many transactions in it, which can produce different types of data. The important thing here is that one message, a block, can be broken down into many smaller messages that a dApp may ultimately be interested in.
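As a simplified sketch of that serialization step (the handler below is a placeholder for the real per-message-type handlers, not the API's actual code):

import PQueue from "p-queue";

// Handle incoming node events one at a time, in arrival order.
const queue = new PQueue({ concurrency: 1 });

async function handleNodeEvent(event: unknown): Promise<void> {
  // parse the event, break it into smaller messages, write them to the data store...
}

export function onNodeEvent(event: unknown): void {
  void queue.add(() => handleNodeEvent(event));
}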

This is significant value-add for the API over the raw data from the Stacks Node, and it is the heart of where other things can potentially happen to produce new value. For instance, while today this focuses on breaking data into smaller atomic parts, in the future a real-time analytics layer could be inserted here, leveraging new incoming data together with persistent or cached state.

The events to the WebSockets RPC layer, however, don't actually come from this event-server message handling code directly. The handler code sends the data to the DataStore, which, in addition to persisting it via a PostgreSQL implementation of the DataStore interface, also emits events that the WebSockets layer can subscribe to. Today, there are three events WebSockets clients can listen to: tx_update, address_tx_update and address_balance_update.

You'll find nearly all the WebSockets handling code in one file, ws-rpc.ts. In there you have a SubscriptionManager that handles client subscriptions using a Typescript Map. Each of the three event types has its own instance of this subscription manager, allowing it to easily obtain the clients subscribing to a particular event. E.g.,

const subscribers = txUpdateSubscriptions.subscriptions.get(txId);

Note that the WebSockets service also queries the DataStore. For tx updates, it can call db.getMempoolTx(…) as well as db.getTx(…).

Once it has the output, or an error, it wraps it in a standard JSON-RPC format using jsonrpc-lite. It then sends this to all WebSockets clients that have subscribed.

subscribers.forEach(client => client.send(rpcNotificationPayload));
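Putting those pieces together, a stripped-down illustration of the pattern (not the actual ws-rpc.ts code) might look like this:

import type WebSocket from "ws";

// Map from transaction ID to the set of sockets subscribed to it.
const txUpdateSubscriptions = new Map<string, Set<WebSocket>>();

function subscribe(txId: string, client: WebSocket): void {
  let clients = txUpdateSubscriptions.get(txId);
  if (!clients) {
    clients = new Set();
    txUpdateSubscriptions.set(txId, clients);
  }
  clients.add(client);
}

function notifyTxUpdate(txId: string, rpcNotificationPayload: string): void {
  const subscribers = txUpdateSubscriptions.get(txId);
  subscribers?.forEach(client => client.send(rpcNotificationPayload));
}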

Overview

This is a very clean design overall. There is a nice separation of concerns. From the WebSockets perspective, it is an event listener that has a large pool of events it could potentially listen to, even though it currently only provides downstream subscriptions to three topics. The events it does listen to currently are the output of very helpful data transformations that permit subscriptions on the details dApps are likely to be concerned with.

There is plenty of room for improvement as WebSocket client interests grow, including the potential for other transformations that are able to take advantage of the combination of the live data stream with the cached or persistent data.


(You can view the source code for @blockstack/stacks-blockchain-api on Github.)


Simplifying WebSockets User Interface (UI) Clients with Happy Pub/Sub Patterns

A back-end WebSocket for live streaming data can typically offer subscriptions to data, and your client UI will typically have one connection to your back-end for all your subscriptions. The challenge becomes how you handle many components in your UI subscribing to it, considering that

  • which components are available can be user driven
  • the components are not aware of each other
  • each component can vary what it subscribes to based on user interactions

UI client with multiple components using a WebSocket connection

The solution to this is to centralize a pub/sub pattern near the client WebSocket connection.

Component Subscription Requirements

There are two types of subscriptions at play. One is subscribing to a topic or channel. The other is subscribing to content or data within that topic or channel.

To give an example, a market data API might provide both Level I and Level II quotes, which have very different data structures. These would be topics or channels a component might subscribe to. Once it is listening on a channel, it may then subscribe to specific content, such as quotes for IBM, TSLA and AMZN.

A component that requires Level I quotes will do two things to begin receiving data:

  • Listen to or observe the Level I channel
  • Request a list of symbols (content) for its Level I subscription as they change

It can begin observing the channel upon creation of the component. As a user interacts and adds or removes stock symbols, it will update its list of symbols it is subscribing to.

The component sends its complete current list, not the changes. If it is subscribed to IBM and the user adds TSLA, it will send the new updated list of ["IBM", "TSLA"]. If it sends an empty list, it is effectively unsubscribing from the data. However, it can continue to observe the channel until it is destroyed.
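As an illustration of that convention, a component might keep its own set of symbols and always resend the whole set; the message shape ({ channel, symbols }) is an assumption for the example, not a prescribed wire format.

const levelOneSymbols = new Set<string>(["IBM"]);

function sendLevelOneSubscription(ws: WebSocket): void {
  ws.send(JSON.stringify({
    channel: "levelOne",
    symbols: Array.from(levelOneSymbols),   // always the complete current list
  }));
}

// The user adds TSLA: update the set, then resend the whole list (now ["IBM", "TSLA"]).
function addSymbol(ws: WebSocket, symbol: string): void {
  levelOneSymbols.add(symbol);
  sendLevelOneSubscription(ws);
}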

As cleanup, when a component is destroyed, it should unsubscribe from the content (stock symbols) as well as end its channel observation.

Subscription and Publishing Services

To handle these component data subscription requirements, we’ll add two services to our UI client.

Stream Request and Publisher Services

A service in the UI is an injectable singleton that handles shared state around a concern. In our case, we have one focused on content subscriptions (e.g., “IBM”), and another on channel subscriptions (e.g., “Level I”) through which the content will be broadcast.

Note that the publishing of data in your Publisher Service via a pub/sub topology can happen via various techniques. In this discussion we'll focus on using multicast (or "hot") observables, such as those provided by the RxJS library, which has its own topic subscription semantics.

The publishing service is where you’ll have each type of channel you want. In our stock quotes example, you can have one channel for Level I quotes, and another for Level II quotes.
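One way to shape such a publisher service with RxJS Subjects is sketched below; the class, interface and channel names are illustrative assumptions, not a prescribed API.

import { Observable, Subject } from "rxjs";

// Illustrative quote shapes; real Level I/II structures will differ.
export interface LevelOneQuote { symbol: string; last: number; }
export interface LevelTwoQuote { symbol: string; bids: [number, number][]; asks: [number, number][]; }

export class QuotePublisherService {
  private readonly levelOne$ = new Subject<LevelOneQuote>();
  private readonly levelTwo$ = new Subject<LevelTwoQuote>();

  // Components observe a channel...
  observeLevelOne(): Observable<LevelOneQuote> { return this.levelOne$.asObservable(); }
  observeLevelTwo(): Observable<LevelTwoQuote> { return this.levelTwo$.asObservable(); }

  // ...and the routing layer publishes into it.
  publishLevelOne(quote: LevelOneQuote): void { this.levelOne$.next(quote); }
  publishLevelTwo(quote: LevelTwoQuote): void { this.levelTwo$.next(quote); }
}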

Both of these services are injected in each subscribing UI component and then used by the component to drive the content it receives.

As your Stream Request Service mediates access to the WebSocket, it is the only service connecting to it. It is not a hard rule that this service also handle routing; it is just one way to do it. The important thing is that you have a service centrally routing your messages and connecting them to your publisher, broadcasting them to your topics based on the type of content. You have a lot of freedom here in how you route messages, and that really is another discussion. The important thing is that by connecting the publishers to your WebSockets through an intermediary, you have created routing ability within your UI, as sketched below.
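Continuing the publisher sketch above, the stream request service might route incoming messages onto those channels by a type tag; the { channel, payload } envelope is an assumption about how the back-end labels messages.

export class StreamRequestService {
  private ws?: WebSocket;

  constructor(private readonly publisher: QuotePublisherService) {}

  connect(url: string): void {
    this.ws = new WebSocket(url);
    this.ws.onmessage = (event) => {
      const msg = JSON.parse(String(event.data));
      switch (msg.channel) {
        case "levelOne": this.publisher.publishLevelOne(msg.payload); break;
        case "levelTwo": this.publisher.publishLevelTwo(msg.payload); break;
        default: break;   // unknown channels are ignored (or logged) here
      }
    };
  }

  // Content subscriptions (e.g., the full symbol list) would also be sent through this.ws here.
}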

The cohesion between content subscriptions and routing of incoming messages makes sense as these concepts are already tied to your WebSocket. If you have more than one back-end you are connecting to, you can create individual services for each one doing this, then optionally put them behind a facade of a single service.

Your publisher, however, should not be coupled directly with the WebSocket because it can potentially have multiple sources of data. This part of our design pattern creates loose coupling between data sources and consumers.

One thing you can also do is add a channel to your publisher just for your UI, one that your components can publish on instead of just listening on. This provides a way for your components to talk back to any service without having a direct connection to it, aka "loose coupling". Because components can also listen to these types of channels, this becomes a means for components to talk to each other without being aware of each other's existence. This is one of my favorite benefits of observable patterns.

Similarly, you can have a channel that talks back to your back-end service. We are already doing that in a sense by subscribing. But, you can go beyond that by, for instance, being the source of data, such as real-time sensors or crowd sourced information.

Note that while we haven’t discussed it, we have the option of a REST service in this topology. This is because some back-ends will require REST calls to subscribe to data, while others will require messages via your WebSockets pipe. If you are just subscribing to data, there is no real right or wrong answer for which method is ideal. The important thing is that once the subscription is honored, receiving data comes through the WebSocket to provide one consistent pipeline of a single timeline. The client hopefully never has to combine REST and WebSockets to construct the actual state of data being subscribed to. That is, you wouldn’t want to have to combine both REST and WebSockets to construct the complete quote of IBM. Your WebSocket provides a single timeline of streaming events and data so your UI components can focus on consumption and presentation.


Learning the Stacks SDK/API – The Demo Client

Stacks has a RESTful API and a library called Stacks.js. Stacks.js is a set of libraries for Typescript clients “which provide everything you need to work with the Stacks blockchain.”

To develop a better understanding, I created a web client, a single page app (SPA), to test out the various capabilities exposed by Stacks.js and the RESTful API. It is hosted on GitHub:

https://github.com/PUN-app/PUN-client

PUN is built using Vue 3 and Typescript.

The client uses the tools required to create a dApp. This includes a demo of Accounts, Authentication, Gaia, Cyphering and Contracts.


Mapping Vue 3 Pages to Firebase Hosting

Vue 3 has a concept called pages that lets you define different HTML pages acting as entry points to your app.

You define these in vue.config.js in the root of your project. Here is a sample:

module.exports = {
  pages: {
    'index': {
      entry: './src/main.ts',
      template: 'public/index.html',
      title: 'Home',
      chunks: [ 'chunk-vendors', 'chunk-common', 'index' ]
    },
    'about': {
      entry: './src/main-about.ts',
      template: 'public/about/index.html',
      title: 'About',
      chunks: [ 'chunk-vendors', 'chunk-common', 'about' ]
    }
  }
}

You’d expect these pages to then be accessed via / and /about URLs. This works in development. However, when you deploy to Firebase Hosting, your /about URL does not work.

The reason is Firebase has no concept of Vue. It’s just an HTTP server looking for a file, and this file doesn’t exist.

If you look in the folder you deploy to Firebase, generated when you do a build, you'll see it did create an about.html file. All you need to do is map your URL to this file.

Firebase provides configuration via firebase.json, also in your root folder. Here is a sample configuration:

{
  "hosting": {
    "public": "dist",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [
      {
        "source": "about",
        "destination": "/about.html"
      },
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  }
}

In the rewrites section, we direct /about to /about.html. We also added another rewrite to direct everything else to the main index.html, which avoids pesky 404 errors.

