From f031a61c96df23fcb40ce848ceb773c41a8957c5 Mon Sep 17 00:00:00 2001
From: Naum
Date: Thu, 20 Feb 2025 21:22:47 +0100
Subject: [PATCH 1/4] Fix typos in README.md

---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index b0c77724c8..19bd5e2399 100755
--- a/README.md
+++ b/README.md
@@ -18,9 +18,9 @@ gecko: abstracted bitcoin compatible blockchains that run via basilisk lite mode
 
 basilisk: abstracted crypto transactions layer, which has a reference implementation for bitcoin protocol via the iguana nodes, but can be expanded to support any coin protocol that can support the required functions. Since it works with bitcoin protocol, any 2.0 coin with at least bitcoin level functionality should be able to create a basilisk interface.
 
-iguana: most efficient bitcoin core implementation that can simultaneously be full peers for multiple bitcoin blockchains. Special support being added to virtualize blockchains so all can share the same peers. The iguana peers identify as a supernet node, regardless of which coin, so by having nodes that support multiple coins, supernet peers are propagated across all coins. non-iguana peers wont get any non-standard packets so it is interoperable with all the existing bitcoin and bitcoin clone networks
+iguana: most efficient bitcoin core implementation that can simultaneously be full peers for multiple bitcoin blockchains. Special support being added to virtualize blockchains so all can share the same peers. The iguana peers identify as a supernet node, regardless of which coin, so by having nodes that support multiple coins, supernet peers are propagated across all coins. non-iguana peers won't get any non-standard packets so it is interoperable with all the existing bitcoin and bitcoin clone networks
 
-komodo: this is the top secret project I cant talk about publicly yet
+komodo: this is the top secret project I can't talk about publicly yet
 
 > # TL;DR
 >
@@ -145,8 +145,8 @@ Loretta:/Users/volker/SuperNET/includes # ln -s ../osx/libsecp256k1 .
 
 3.) I had to change ulimit
 During the syncing, I have many, many messages like this:
 >>
->> cant create.(tmp/BTC/252000/.tmpmarker) errno.24 Too many open files
->> cant create.(tmp/BTC/18000/.tmpmarker) errno.24 Too many open files
+>> can't create.(tmp/BTC/252000/.tmpmarker) errno.24 Too many open files
+>> can't create.(tmp/BTC/18000/.tmpmarker) errno.24 Too many open files
 >>
 Loretta:/Users/volker/SuperNET # ulimit -n 100000

From 54205effd45e23c1757bdbad737a58a860d971bb Mon Sep 17 00:00:00 2001
From: Naum
Date: Thu, 20 Feb 2025 21:22:49 +0100
Subject: [PATCH 2/4] Fix typos in OSlibs/ios/iOS_Readme.md

---
 OSlibs/ios/iOS_Readme.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/OSlibs/ios/iOS_Readme.md b/OSlibs/ios/iOS_Readme.md
index f28554a200..626377c532 100755
--- a/OSlibs/ios/iOS_Readme.md
+++ b/OSlibs/ios/iOS_Readme.md
@@ -6,18 +6,18 @@
 
 ## Compile iguana for iOS
 
-- Get SuperNET repository clonned on your machine with command
+- Get SuperNET repository cloned on your machine with command
 
 `git clone https://github.com/jl777/SuperNET`
 
-- Change your directory to the clonned SuperNET and execute the following commands:
+- Change your directory to the cloned SuperNET and execute the following commands:
 
 `./m_onetime m_ios`
 
 `./m_ios`
 
-- You'll find `libcrypto777.a` and `iguana` for iOS in agents directory inside SuperNET repo clonned dir.
-- To check if the files are for iOS platform, you can execute the folowing command which will show a result something like this:
+- You'll find `libcrypto777.a` and `iguana` for iOS in agents directory inside SuperNET repo cloned dir.
+- To check if the files are for iOS platform, you can execute the following command which will show a result something like this:
 
 `cd agents`

From bddd8a2cc201538600edc7988e4a0fa370a5fbcf Mon Sep 17 00:00:00 2001
From: Naum
Date: Thu, 20 Feb 2025 21:22:50 +0100
Subject: [PATCH 3/4] Fix typos in OSlibs/android/Android_Readme.md

---
 OSlibs/android/Android_Readme.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/OSlibs/android/Android_Readme.md b/OSlibs/android/Android_Readme.md
index 016ee232b4..2cfedc3a2c 100755
--- a/OSlibs/android/Android_Readme.md
+++ b/OSlibs/android/Android_Readme.md
@@ -26,7 +26,7 @@ source set_android_env.sh
 
 `echo $AR`
 
-- If getting output your Android NDK developement environment is set temporarily in terminal window in which you executed the set_android_env.sh script.
+- If getting output your Android NDK development environment is set temporarily in terminal window in which you executed the set_android_env.sh script.
 
 ## Compile iguana for android

From 6ca0666c49e9d31f086bf5580eda0cc55cad3bce Mon Sep 17 00:00:00 2001
From: Naum
Date: Thu, 20 Feb 2025 21:22:52 +0100
Subject: [PATCH 4/4] Fix typos in iguana/Readme.md

---
 iguana/Readme.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/iguana/Readme.md b/iguana/Readme.md
index db6a82af25..32b8467d74 100755
--- a/iguana/Readme.md
+++ b/iguana/Readme.md
@@ -124,7 +124,7 @@ The following are the second pass data structures that are created from a batch
 
 I tried quite a few variations before settling on this. Earlier versions combined everything into a single dataset, which is good for making searches via hashtable really fast, but with the ever growing size of the blockchain not very scalable. The maximum size of 2000 blocks is 2GB right now and at that size there is no danger of overflowing any 32bit offset, but for the most part, the 32bit indexes are of the item, so it can represent much larger than 4GB.
 
-iguana doesnt use any DB as that is what causes most of the bottlenecks and since the data doesnt change (after 20 blocks), a DB is just overkill. Using the memory mapped file approach, it takes no time to initialize the data structures, but certain operations take linear time relative to the number of bundles. Achieving this performance requires constant time performance for all operations within a bundle. Since most bundles will not have the hash that is being searched for, I used a bloom filter to quickly determine which bundles need to be searched deeper. For the deeper searches, there is a open hashtable that always has good performance as it is sized so it is one third empty. Since the total number of items is known and never changes, both the bloom filters and hashtable never change after initial creation.
+iguana doesn't use any DB as that is what causes most of the bottlenecks and since the data doesn't change (after 20 blocks), a DB is just overkill. Using the memory mapped file approach, it takes no time to initialize the data structures, but certain operations take linear time relative to the number of bundles. Achieving this performance requires constant time performance for all operations within a bundle. Since most bundles will not have the hash that is being searched for, I used a bloom filter to quickly determine which bundles need to be searched deeper. For the deeper searches, there is a open hashtable that always has good performance as it is sized so it is one third empty. Since the total number of items is known and never changes, both the bloom filters and hashtable never change after initial creation.
 
 What this means is that on initialization, you memory map the 200 bundles and in the time it takes to do that (less than 1sec), you are ready to query the dataset. Operations like adding a privkey takes a few milliseconds, since all the addresses are already indexed, but caching all the transactions for an address is probably not even necessary for a single user wallet use case. However for dealing with thousands of addresses, it would make sense to cache the lists of transactions to save the few milliseconds per address.
@@ -140,7 +140,7 @@ I had to make the signatures from the vinscripts purgeable as I dont seem much u
 
 It is necessary to used an upfront memory allocation as doing hundreds of millions of malloc/free is a good way to slow things down, especially when there are many threads. Using the onetime allocation, cleanup is guaranteed to not leave any stragglers as a single free releases all memory. After all the blocks in the bundle are processed, there will be a gap between the end of the forward growing data called Kspace and the reverse growing stack for the sigs, so before saving to disk, the sigs are moved to remove the gap. At this point it becomes clear why it had to be a reverse growing stack. I dont want to have to make another pass through the data after moving the signatures and by using negative offsets relative to the top of the stack, there is no need to change any of the offsets used for the signatures.
 
-Most of the unspents use standard scripts so usually the script offset is zero. However this doesnt take up much room at all as all this data is destined to be put into a compressed filesystem, like squashfs, which cuts the size in about half. Not sure what the compressed size will be with the final iteration, but last time with most of the data it was around 12GB, so I think it will end up around 15GB compressed and 25GB uncompressed.
+Most of the unspents use standard scripts so usually the script offset is zero. However this doesn't take up much room at all as all this data is destined to be put into a compressed filesystem, like squashfs, which cuts the size in about half. Not sure what the compressed size will be with the final iteration, but last time with most of the data it was around 12GB, so I think it will end up around 15GB compressed and 25GB uncompressed.
 
 Each bundle file will have the following order:
 
 [ ][nonstandard scripts and other data] ... gap ... [signatures]