Stark Drones intends to collect more intensive data as research and launches progress. The data shown here is provided "as-is" and has mostly been anonymized. As funding and research allow, we want to run more studies for both our algorithms and our hardware launches.
Neutrino Oscillations
Mining Stress Test
MSP-430 Memory Configuration
```
name               origin    length    used      unused    attr  fill
-----------------  --------  --------  --------  --------  ----  ----
SFR                00000000  00000010  00000000  00000010  RWIX
PERIPHERALS_8BIT   00000010  000000f0  00000000  000000f0  RWIX
PERIPHERALS_16BIT  00000100  00000100  00000000  00000100  RWIX
RAM                00000200  00000080  00000032  0000004e  RWIX
INFOD              00001000  00000040  00000000  00000040  RWIX
INFOC              00001040  00000040  00000000  00000040  RWIX
```
See the full memory configuration mapping in the repository.
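The layout above can also be checked programmatically. Below is a minimal Python sketch, not part of the MSP-430 toolchain, that parses rows in this linker-map format and reports how much of each region is used; the row strings are copied from the table above, and the parsing logic is only an illustration.

```python
# Minimal sketch: parse MSP-430 linker-map memory rows and report usage.
# The rows below are copied from the memory configuration table above;
# column order is name, origin, length, used, unused, attr.
ROWS = """\
SFR                00000000 00000010 00000000 00000010 RWIX
PERIPHERALS_8BIT   00000010 000000f0 00000000 000000f0 RWIX
PERIPHERALS_16BIT  00000100 00000100 00000000 00000100 RWIX
RAM                00000200 00000080 00000032 0000004e RWIX
INFOD              00001000 00000040 00000000 00000040 RWIX
INFOC              00001040 00000040 00000000 00000040 RWIX
"""

for line in ROWS.splitlines():
    name, origin, length, used, unused, attr = line.split()
    length_b, used_b = int(length, 16), int(used, 16)
    pct = 100.0 * used_b / length_b if length_b else 0.0
    print(f"{name:<18} {used_b:>4} / {length_b:>4} bytes used ({pct:.1f}%)")
```

For the RAM region, for example, this reports 50 of 128 bytes used (about 39%), matching the 0x32/0x80 values in the table.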
Photon Dev Board Subscription Data
This trace was taken from app data captured on 6/27/2022 for the same board used in the Internet Balloon Launch on June 18th, 2022. The data has been scrubbed/anonymized. It gives insight into the following (a sketch of one way to collect a similar trace follows this list):
- Data/Stream Subscription Capabilities
- Stream logging capabilities
- Bytesize and Post/Data Capabilities
- Logging Capabilities w/ Android and Photon Board
- Data Pinging and Sync Capabilities
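The sketch below shows one possible way to reproduce this kind of subscription/logging trace in Python. It assumes the board publishes events through the Particle Cloud event stream; the endpoint, the `PARTICLE_TOKEN` environment variable, and the per-line logging format are assumptions for illustration and are not taken from the launch data or the original tooling.

```python
# Sketch (not the launch tooling): subscribe to a Particle Cloud event
# stream and log per-message byte sizes and timestamps, assuming the
# Photon board publishes events through api.particle.io.
import os
import time
import requests

TOKEN = os.environ["PARTICLE_TOKEN"]                 # assumed access token
URL = "https://api.particle.io/v1/devices/events"    # assumed SSE endpoint

with requests.get(URL, params={"access_token": TOKEN},
                  stream=True, timeout=60) as resp:
    resp.raise_for_status()
    for raw in resp.iter_lines():
        if not raw:
            continue                                  # skip SSE keep-alive lines
        ts = time.time()
        # Each non-empty SSE line is either "event: ..." or "data: {...}"
        print(f"{ts:.3f}  {len(raw):>5} bytes  {raw[:60]!r}")
```

Logging the byte size and wall-clock timestamp per message is enough to reconstruct the stream logging, byte-size, and data-pinging behavior listed above.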
```
592000,quic=":443"; ma=2592000; v="46,43"
Cache-Control: private
Content-Encoding: gzip
Content-Type: application/json; charset=UTF-8
Date: Mon, 27 Jun 2022 19:48:31 GMT
ETag: etag-1070197449556-fireperf-fetch-1769032129
```
You can also use Logcat or a network visualizer to look at the trace history. These bytes were transferred within a short time period. The most thorough way to visualize it is probably a network visualizer toolkit that can trace at nanosecond and millisecond precision.
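If a dedicated network visualizer is not available, a rough view of inter-arrival times can be had with a short script. The Python sketch below times chunks of a streamed HTTP response using nanosecond-resolution counters; the URL is a placeholder, and the timings are application-level approximations rather than wire-level measurements.

```python
# Sketch: log inter-arrival times of streamed response chunks with
# nanosecond-resolution counters (application-level, not wire-level).
import time
import requests

URL = "https://example.com/stream"   # placeholder endpoint

prev = time.perf_counter_ns()
with requests.get(URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=1024):
        now = time.perf_counter_ns()
        delta_ms = (now - prev) / 1e6
        prev = now
        print(f"{len(chunk):>5} bytes after {delta_ms:9.3f} ms")
```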
Decentralized Internet SDK Usage Stats
Knowledge Preservation on the Blockchain
Previously archived via Arweave.
Realtime Data Analysis and Load Testing w/ the Cloud
We ran a test using Nextflow and Seqera on the ChIP-seq pipeline template with custom configurations, specifying a maximum of a single CPU core and 6 GB of memory, connected to Google Batch through an API. These are the results. If this setup scales, we could potentially analyze 500 TB of data for only $31.38. This is relevant to cloud/edge computing, and an algorithm more tailored to the actual targets and data types being analyzed could likely yield even more valuable results.
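For context, the implied unit cost behind the $31.38 figure works out as follows, under the strong assumption that cost scales linearly with the volume of data analyzed; the numbers are taken from this section, not from a new benchmark.

```python
# Back-of-the-envelope extrapolation using the figures quoted above,
# assuming cost scales linearly with the volume of data analyzed.
PROJECTED_TB = 500          # projected data volume from the text
PROJECTED_COST_USD = 31.38  # projected cost from the text

cost_per_tb = PROJECTED_COST_USD / PROJECTED_TB
print(f"Implied cost: ${cost_per_tb:.4f} per TB")   # roughly $0.063 per TB

for tb in (1, 10, 100, 500):
    print(f"{tb:>4} TB -> ${tb * cost_per_tb:,.2f}")
```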