Apologies for the delay. It's been a busy few weeks.
https://www.youtube.com/watch?v=oIEnoDzcXlI
@mcfarhat
Okay, we're recording. If you guys want to start, it's up to you.
@blocktrades
Okay, awesome. Well, since we think howo possibly isn't going to make it today because of the time zone change, I'll start. This is Dan, otherwise known as blocktrades. So I'll start off the meeting discussing what we've been doing lately.
I guess, first of all, on the hived front (basically the blockchain node software), we've been doing a couple of things of interest, I think, to everyone. First, there was a request to increase the transaction expiration time. Previously, a transaction would only last for about an hour once it was broadcast.
And this was a problem for people who wanted to do multi-signing, because we often do multi-signing out of band, and you might have to contact some people and wait for them to sign and then sort of pass the transaction around to get all the signatures. So it was requested to increase the transaction expiration time to something like a day.
In theory, that looks pretty easy to do; it's just a few lines changed in the code. But we also wanted to be sure that this didn't cause any kind of problems, because obviously the time was only set to an hour for some reason, presumably. So we looked into it, and we did find that there were cases where this increased transaction expiration time could let someone attack the network more easily, by creating a huge number of transactions that eat up space, and specifically memory, on people's nodes. Just to figure that out was, of course, a lot of work, because we had to set up a test where we could literally do that kind of flooding of the network ourselves, which isn't as easy as it sounds to do.
We had to write special versions of hived that were capable of generating and processing more transactions than hived normally does right now. But we had already been kind of working on that for some of our other testing, for future performance work.
So we basically got all that working, and we were able to see that indeed, by flooding, you could increase the amount of memory used on a hived node by several gigabytes. To resolve that problem, we made some changes to the way RC (resource credit) calculations are performed under flooding conditions, so that transactions temporarily get progressively more expensive under a flooding condition, and therefore get dropped by the node so that they don't eat up memory.
We also set limits on where these kinds of triggers happen. This doesn't affect the actual final RC cost when a transaction gets put into the blockchain; it's just that when a node first processes a transaction, there's a temporary RC cost that's only calculated locally on that node, not actually recorded on-chain. Oh, okay. Gandalf said he's having some problems with sound, but I don't think it's impacting any of us.
@mcfarhat
No, I can't hear you well.
@blocktrades
Yeah, okay. Good. So basically, this is really a change to the RC calculations. It's just a temporary one at the nodes themselves so that they don't eat up too much memory under these flooding conditions. And again, this was very speculative.
I don't know that this problem would happen anytime soon, because it took us a while even to be able to generate the conditions, but we want to be as safe as possible when we make changes like this. So it sort of future-proofs us against flooding attacks.
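To illustrate the idea, here's a minimal sketch of that kind of escalating node-local surcharge. All the names and constants below are made up for illustration; the real logic lives in hived's C++ resource credit code and differs in detail.

```typescript
// Sketch only: a node-local RC surcharge that escalates under flooding.
// FLOOD_THRESHOLD and ESCALATION_BASE are illustrative, not hived's values.

const FLOOD_THRESHOLD = 10_000; // pending-transaction count that triggers escalation
const ESCALATION_BASE = 2;      // growth rate of the temporary multiplier

function temporaryRcCost(baseCost: number, pendingTxCount: number): number {
  if (pendingTxCount <= FLOOD_THRESHOLD) {
    return baseCost; // normal conditions: the usual RC cost applies
  }
  // Under flooding, the node-local cost grows with the backlog, so spam
  // transactions exhaust their sender's RC and get dropped before they can
  // eat up the node's memory. The final RC cost recorded when a transaction
  // makes it into a block is NOT affected by this multiplier.
  const overloadLevel = Math.floor(pendingTxCount / FLOOD_THRESHOLD);
  return baseCost * Math.pow(ESCALATION_BASE, overloadLevel);
}
```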
So I think that's another positive for this change. This change will happen as part of the next hard fork. I've been planning the next hard fork for December, and that's when it's currently set. But there's one more change I really want to get in, so what I'm thinking of doing right now is to release everything else first.
So we've got two sort of separate sets of changes: the hard fork changes, and all the changes related to the API nodes. The API node changes are things like updates to Hivemind, updates to the block explorer, the balance tracker, HAF account history, all these kinds of things. Those are quite separate changes, and they can be deployed separately. That's something I started thinking about basically today.
I still want to keep a December timeframe for upgrading all the nodes, so I think we should do the upgrade to all the API nodes in December. Everything looks to be coming together as far as testing goes for that to be ready in December. But then I want to push back the hard fork date, so that we can make some more changes to hived prior to the hard fork.
So I'm thinking first quarter for the hard fork itself, but the API node release in December. It shouldn't cause any real extra trouble for people running nodes, because there won't be any kind of replay required when they later upgrade to the hard fork version. So basically, everybody will have to do a replay in December, and then there shouldn't be anything else required in the first quarter to upgrade to the hard fork version of hived itself.
The main reason I'm sort of delaying the hived release is that I want to make some changes to signing. There have been some requests for changes, and I also want to make some changes of my own to the signing. As part of that, I also want to release another HAF app, which can basically be deployed later, which is the lite accounts app. That one's a little behind, because the guy who had planned to work on it has been tied up with another project, and he's just finally coming free now.
I've already made some posts which cover what we've done lately, but I just want to give a few more quick highlights, and also an update since that post. Like I said, there's not a lot in the hard fork right now in terms of changes that I think could cause many issues or potential concerns. The biggest one is going to be this increased expiration time, and again, that'll be first quarter.
But we've done quite a bit on the HAF side of things. We basically rewrote the loop for how HAF apps work, and that was done to make it more difficult to write an app that has problems. One of the things we noticed when we were developing HAF apps is that if you made commits to the database at the wrong time, and then your process was interrupted for some reason, say your app crashed or something like that, then it might end up in an improper state when it relaunched.
So we redesigned the loop that HAF uses so that it's now very difficult to write a main loop in your app that has that kind of problem (the basic idea is sketched below). We've tested the new loop on all our HAF apps, and it works great; we haven't had any more problems of that sort since the change. Another thing we've done recently is testing the switch from Postgres 16 to Postgres 17. I'm running tests on that right now and doing benchmarks, and so far the speed looks the same.
So we don't have any problems, as far as I can tell so far. I'm still replaying Hivemind, but once that replay is done, the last thing I'll need to do as far as benchmarking goes is to test with production data on the query side, to be sure none of the queries themselves are slower. But the replay and sync times all seem quite good so far with Postgres 17.
So I don't anticipate any problems with the move to Postgres 17. Another change I'm also testing in HAF, which I guess is the other big final change we'd like to get into HAF before the release, is that we've shifted the data stored in HAF into a separate schema called hafd.
Basically, all the data and all the code (the API, more or less) were previously stuck in one schema called hive. Now we've separated that into two different schemas: one contains the data and one contains the API. We did this to make it easier for us to generate upgrades between versions of HAF, because it's sometimes been troublesome to upgrade easily between two different versions of HAF. This should simplify the process of making an upgradeable version.
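To illustrate the loop-safety idea mentioned above: the essential point is that an app's state changes and its record of the last processed block are committed in one transaction, so an interruption can never leave them out of sync. A minimal sketch follows, written here with the node-postgres client; the table names and the function are hypothetical, and real HAF apps drive this through HAF's own SQL machinery.

```typescript
import { Client } from "pg";

// Hypothetical example tables: app_state holds the app's derived data,
// app_progress holds the last block the app has fully processed.
async function processBatch(db: Client, fromBlock: number, toBlock: number) {
  await db.query("BEGIN");
  try {
    // 1. Apply the app's own processing for the block range.
    await db.query(
      "INSERT INTO app_state (block_num) SELECT generate_series($1::int, $2::int)",
      [fromBlock, toBlock]
    );
    // 2. Advance the progress marker in the SAME transaction.
    await db.query("UPDATE app_progress SET last_block = $1", [toBlock]);
    // 3. Commit once. A crash anywhere above rolls back both changes
    //    together, so on relaunch the app resumes from a consistent state.
    await db.query("COMMIT");
  } catch (err) {
    await db.query("ROLLBACK");
    throw err;
  }
}
```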
Let's see, so that's most of what's going on in HAF. One other project we've been working on for quite a while, which I've mentioned in passing (it's been about two or three months now), is basically rewriting the server side of Hivemind. It currently uses a combination of SQL and Python code to respond to queries.
That had a couple of disadvantages. One, SQL code just tends to be more efficient on a database server than Python code. And two, it made it difficult for us to use a PostgREST server to serve up the API calls. For all our other HAF apps, we're basically using PostgREST servers now instead of Python-based servers, because the performance of PostgREST is much better.
So we've finally, just today I guess, finished the conversion of all those API calls to pure SQL. Next, we're going to benchmark that, specifically with PostgREST, and check the new performance of everything. This was also important for another reason, not just the performance advantage: as a lot of you guys know, we're making a move from the JSON-RPC-based API to a REST-based API, and our preferred way of doing those REST APIs is using PostgREST.
So we really needed to switch the server to PostgREST in order to start the move to a REST-based API for Hivemind itself. We've basically completed the REST API for all the other apps: we've done it for the reputation tracker, for the balance tracker, for HAF account history, and for the block explorer.
But we still don't have a REST API yet for Hivemind. Now, though, we're finally in a position where we can start on that work as well. As part of that, we're also going to do some analysis of the existing Hivemind API calls and see if we can make some of them a little more logical as we make the move to REST. So that's kind of what's going on in Hivemind.
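From a client's point of view, the move looks roughly like this. The JSON-RPC form is the long-standing one; the REST path shown is purely hypothetical, since the Hivemind REST routes are still being worked out and the real ones will be documented in the Swagger pages.

```typescript
// Old style: JSON-RPC 2.0, everything POSTed to one generic endpoint.
async function getAccountJsonRpc(name: string) {
  const res = await fetch("https://api.hive.blog", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      method: "condenser_api.get_accounts",
      params: [[name]],
      id: 1,
    }),
  });
  return res.json();
}

// New style: a plain GET against a resource-shaped route served by PostgREST
// directly from a SQL function. This path is hypothetical.
async function getAccountRest(name: string) {
  const res = await fetch(`https://api.hive.blog/accounts/${name}`);
  return res.json();
}
```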
For the other apps I mentioned, like I said, the other thing we've been doing is creating REST API calls to replace the JSON-RPC-based calls we had before. I've tested and published that, and we've got a server up now where anybody can test the new REST API calls. The move to REST has also allowed us to document the API in an interactive way using Swagger.
So now if a dev wants to come along and see how the API works, they can just go to those pages, make interactive calls, and see what the calls do, and what happens if they make a change, without having to write a lot of code. They can just sit there and tinker within the Swagger pages, which is really nice, I think.
@mcfarhat
You know, not to interrupt you, Dan, but funny enough, one of my new devs, I did not mention Swagger to him. He was researching something and just came to me saying, okay, this is how we call this thing. And I said, how did you come upon Swagger? He said, I was just Googling things and I found it online. It was amazing. I love this.
@blocktrades
Oh, so he found it online.
@mcfarhat
Yeah, found it online while Googling stuff. That was amazing.
@blocktrades
Oh, that's quite cool. So yeah, I think the switch to Swagger is really going to be a big leap forward for our documentation process for the whole Hive API. And it kind of forces everybody to do it, too.
And the idea is that now all the documentation will have a kind of standardized format, which I think is really important, especially as we get more devs working on different projects.
I guess it's a good time to mention that over the past little while, we've also been slowly transitioning development effort for the Block Explorer over to McFarhat's group. We're still doing a little bit of work on our side, but we're in the process of finishing up and finalizing the handoff of all that code to his team, which I think is great.
It gets another group very familiar with the HAF development process, and it also frees us up to work on some other projects as well.
Let's see, what else? We've done a bunch of work on the front-end side as well, and I've covered that in my post too, so I don't want to go into that much detail.
But I'm trying to think what I should really cover there. Actually, before I get to that, the other thing I want to talk about a little is the state of Wax. Wax is also getting quite close to release, and it's still on time for a December release as well. Wax is basically our new library for using all these APIs, the new REST APIs especially.
The most recent thing we're doing in Wax is building a health checker. In fact, if Bartek's able to talk, I might ask him to cover the current state of the health checker, because I haven't had a chance to check with him on it.
But basically, I'll just describe what it's for first. The health checker is basically some code inside Wax that lets you test the state of the various API servers you're using and select which API servers to use.
It basically allows you to switch back and forth to whichever servers give the best performance. But I'll let Bartek describe the features more.
@BW
Yes, actually, this is a part of the library, let's say a class, which allows you to register API endpoints and the servers we want to examine with such calls.
It is possible to register custom validators with this tool. So a programmer who would like to integrate this tool into their application can write validators to check that the responses received from given servers match expectations.
The tool periodically sends the registered requests to the specified servers and calculates a score for each endpoint. It then notifies the parent application about changes in the scores: the best endpoint, the worst, and so on. Actually, just today we completed another stage, improved the internals of this tool, and finally added support for REST calls as well.
So both types of calls can be examined by this tool. One of our guys still working on the Block Explorer is writing some UI components that integrate this tool, which will first be used on the Block Explorer site. As far as I know, the work is progressing quite well. Probably tomorrow we could see some UI for this health checker component, and maybe we will publish some results about it.
Our idea is to share this component with other applications, especially ones also developed here, for example Denser. And maybe actually every application using Hive API calls could use it to verify endpoints and have some common support for that.
This component also uses Wax's support for making API calls in an object style, because Wax allows you to define an object structure and then use API calls as regular object methods. That greatly simplifies using it. And actually, that's most of the information about it. Maybe you have some questions.
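To give a feel for the shape of this, here is a rough sketch of that kind of registration-and-scoring flow. It is illustrative only: the class and method names below are placeholders, not the actual Wax health checker interface.

```typescript
import { EventEmitter } from "events";

type Validator = (response: unknown) => boolean;

interface Endpoint {
  url: string;         // server to examine
  api: string;         // API method this registration covers
  validate: Validator; // programmer-supplied correctness check
}

// Placeholder class, not the real Wax API: scores each registered endpoint
// on response time and validator success, then notifies the application.
class HealthCheckerSketch extends EventEmitter {
  private endpoints: Endpoint[] = [];

  register(url: string, api: string, validate: Validator): void {
    this.endpoints.push({ url, api, validate });
  }

  async checkAll(): Promise<void> {
    // All endpoints are probed concurrently, so checking a whole set of
    // servers costs roughly one round trip, not one per server.
    // (A real implementation would issue the registered API call; this
    // sketch just probes the URL.)
    const scored = await Promise.all(
      this.endpoints.map(async (ep) => {
        const start = Date.now();
        try {
          const res = await fetch(ep.url);
          const body: unknown = await res.json();
          const elapsedMs = Date.now() - start;
          // A failed validation scores zero; otherwise faster is better.
          return { ep, score: ep.validate(body) ? 1000 / elapsedMs : 0 };
        } catch {
          return { ep, score: 0 };
        }
      })
    );
    scored.sort((a, b) => b.score - a.score);
    this.emit(
      "scores-updated",
      scored.map((s) => ({ url: s.ep.url, score: s.score }))
    );
  }
}
```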
@blocktrades
Yeah, actually I do have a few, and the other guys may too. So first, just correct me if I'm wrong about any of this. As far as I know, you can basically specify a node that you want to talk to for either a set of APIs, or maybe even a specific API call.
@BW
Yes, it is possible to define a set of APIs for a given node, and a single one also. Then results are collected for each API method, and the best node is selected based on the set of methods.
@blocktrades
Okay, and so what's the metric it's using for the best node? Is it in terms of latency, like how fast it responds?
@BW
Yes, the tool analyzes response times, though actually we're quite limited here, because we can only use the APIs available in a web browser, which rules out most lower-level network communication support.
But it was possible to collect some timing metrics specific to making the calls. And of course, another important part of the metric is the correctness of a given node, and whether it actually supports a given API method. That part is covered by the registered validators, which are defined by the programmer.
So we can, for example, register a find_accounts call and verify that the specified node recognizes, for example, the blocktrades account, or gtg, or whatever is needed, and allow such a node to be selected only if it can correctly respond to that.
I hope it is quite nicely designed and easy to use. It uses standard patterns commonly used in front-end development, like event emitters and so on. What more?
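As a concrete illustration of such a validator, using the sketch class above (again, the endpoint, the method name, and the response shape are illustrative):

```typescript
// Illustrative: accept a node only if its account-lookup API actually
// recognizes a well-known account such as blocktrades.
const checker = new HealthCheckerSketch();

checker.register(
  "https://api.hive.blog",      // server to examine
  "database_api.find_accounts", // API method being validated
  (response: unknown): boolean => {
    const body = response as { result?: { accounts?: { name: string }[] } };
    return body.result?.accounts?.some((a) => a.name === "blocktrades") ?? false;
  }
);

// The parent application reacts to ranking changes via an event.
checker.on("scores-updated", (scores: { url: string; score: number }[]) => {
  console.log("current ranking:", scores);
});
```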
@blocktrades
I guess I had a question. So it's basically verifying that a node is working based on it passing validation. Is that performed periodically, and if so, how often?
@BW
Yes, it is performed periodically; the calls are probably made once every few seconds.
@blocktrades
Okay.
@BW
Actually, this is a configuration parameter specified as a constant, so we can easily change the frequency of such verification. The checks are also made concurrently, so every method is called concurrently, and the whole check of the set is not so time consuming.
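In terms of the sketch above, that polling loop amounts to something like this (again illustrative; the real interval constant lives inside Wax and may differ):

```typescript
// Hypothetical polling interval, mirroring a constant inside the library.
const CHECK_INTERVAL_MS = 5_000;

// Re-run the concurrent check of every registered endpoint every few
// seconds, reusing the checker instance from the sketch above.
setInterval(() => {
  checker.checkAll().catch((err) => console.error("health check failed:", err));
}, CHECK_INTERVAL_MS);
```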
@blocktrades
Nice. Okay.
@Brian of London (Dr)
I get that. A quick question. Can I jump in with one? Sure. It's Brian. I'm just about to rewrite my back-end Python stuff. How close is Wax and these kinds of things to being a module I can just, you know, pip install, like a replacement for Beem or Lighthive? Are we close to a release, or is there a pre-release? Is anybody putting it on PyPI yet?
@BW
We are making progress toward a release of Python Wax. Actually, we lately completed some important parts of that, and a module called beekeepy was developed.
That's a Python wrapper over our Beekeeper tool, which is required to sign transactions. It was the first part needed to start working on the object definition of the Python interface in Wax. I hope we can start designing this interface maybe even this week. It depends on other work specific to our tool Clive, which uses a lot of our Python resources.
But I'm very focused on starting that work, because I know it is very important for developers. We will try to prepare an initial version of it, maybe even this year.
Probably most of the design of this Python object interface will be similar to the TypeScript version, which was designed quite a while ago and has already been tested by several applications. I hope the Python usage patterns can be really close to those, so we can design this interface in a similar way and avoid reinventing the wheel in this part.
So I hope the remaining work is mostly implementation, and not too much time will be needed for the design of this interface.
@Brian of London (Dr)
Thanks for that. If I can help at all, send me a direct message or something. I don't know if my skills are up to it, but when I looked at Wax itself, it kind of wasn't in a format where I could figure out how to call it or make any use of it.
So if someone can write a scaffolding, I'll fill bits in as needed; I can maybe put some time into that. So thanks. Okay.