
Bring your TMS watchlist to life with Watchlist LIVE. This browser extension supercharges your NEPSE TMS by seamlessly auto-updating your TMS watchlist (under Market Data ► Watch List) with near real-time price and volume data.

Everything happens right in your browser, with no data collection or transmission. It simply uses what's provided by the TMS, processing everything locally on your machine.

Say goodbye to the hassle of endless browser refreshes and switching between multiple tabs and third-party providers! With Watchlist LIVE, you can cut through market noise and focus on the stocks that matter most to you without leaving your TMS.

Try Watchlist LIVE now.

Handling large JSONs in Golang can be tedious. The most common approach is to unmarshal the JSON into structs and then access the required fields. But this method quickly becomes cumbersome when dealing with massive JSONs with highly nested fields, especially when only a fraction of the JSON is needed. Libraries like GJSON offer a solution for this. However, GJSON still requires defining string paths to navigate through the JSON data, which loses Go's type information and reduces the IDE's effectiveness.
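For context, here is a minimal sketch of the plain string-path style this post moves away from (the JSON document, path, and values here are made up purely for illustration):

package main

import (
    "fmt"

    "github.com/tidwall/gjson"
)

func main() {
    // A hypothetical document, just to illustrate the string-path style.
    jsonStr := `{"user": {"address": {"city": "Kathmandu"}}}`
    // A typo in the path is not a compile error; it silently yields an empty result.
    city := gjson.Get(jsonStr, "user.address.city").String()
    fmt.Println(city) // Kathmandu
}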

A technique I found convenient is to create a wrapper struct around GJSON's gjson.Result. This allows the struct to "inherit" GJSON's functionality and be extended. For instance, consider the following (Gemini generated) JSON structure:

{
  "PokemonCatalog": {
    "PokemonList": [
      {
        "PokeId": ...,
        "Name": ...,
        "Type": [...],
        "Height": ...,
        "Weight": ...,
        "RandomEvolves": ...,
        "WeirdColor": ...,
        "Moves": [
          {
            "MoveName": ...,
            "MoveType": ...,
            "Power": ...,
            "Accuracy": ...,
            "RandomMoveProperty": ...,
            "MoveEffects": {
              "StatusEffect": ...,
              "DamageType": ...,
              "RandomEffect": ...,
              "SubEffect": {
                "SubEffectDetail": ...,
                "SubEffectValue": ...
              }
            }
          }
        ],
        "RandomPokemonFact": ...,
        "RandomPokemonNoise": ...
      }
    ]
  }
}

(It is only lightly nested, so the unmarshal-to-struct route would probably be straightforward here, but bear with me.) In order to read the Pokémon in the PokemonList, the first step is to read the root PokemonCatalog structure:

package pokemon

import (
    "github.com/tidwall/gjson"

    "pokemonjson/utils"
)

type PokemonCatalog struct {
    gjson.Result
}

func NewPokemonCatalog(content gjson.Result) *PokemonCatalog {
    catalog := content.Get("PokemonCatalog")
    return &PokemonCatalog{
        Result: catalog,
    }
}

Now, to read the PokemonList array, simply add a method to the above struct that reads PokemonList and converts each item into a Pokemon struct:

package pokemon

func (p *PokemonCatalog) GetPokemons() []*Pokemon {
    pokemonList := p.Get("PokemonList").Array()
    return utils.Map(pokemonList, func(pokemon gjson.Result) *Pokemon {
        return NewPokemon(pokemon)
    })
}
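The utils.Map helper isn't shown here; assuming it is a small generic map-over-slice function living in the local pokemonjson/utils package, a minimal sketch could look like this:

package utils

// Map applies fn to every element of items and returns the transformed slice.
// This is an assumed implementation of the helper used throughout this post.
func Map[T, U any](items []T, fn func(T) U) []U {
    result := make([]U, 0, len(items))
    for _, item := range items {
        result = append(result, fn(item))
    }
    return result
}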

Similar to the PokemonCatalog struct, the Pokemon struct is also a wrapper around gjson.Result:

package pokemon

type Pokemon struct {
    gjson.Result
}

func NewPokemon(content gjson.Result) *Pokemon {
    return &Pokemon{content}
}

It contains methods to get attributes of the individual items in PokemonList:

package pokemon

// ...

func (p *Pokemon) GetName() string {
    return p.Get("Name").String()
}

func (p *Pokemon) GetId() string {
    return p.Get("PokeId").String()
}

func (p *Pokemon) GetType() []string {
    return utils.Map(p.Get("Type").Array(), func(t gjson.Result) string {
        return t.String()
    })
}

func (p *Pokemon) GetHeight() float64 {
    return p.Get("Height").Float()
}

func (p *Pokemon) GetWeight() float64 {
    return p.Get("Weight").Float()
}

func (p *Pokemon) GetMoves() []*Move {
    return utils.Map(p.Get("Moves").Array(), func(t gjson.Result) *Move {
        return NewMove(t)
    })
}
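The Move type returned by GetMoves follows the same wrapper pattern. Its definition isn't listed in this post, but a sketch along these lines (the exact getters are assumed from how Move is used later) could look like this:

package pokemon

// Move wraps gjson.Result just like Pokemon does.
type Move struct {
    gjson.Result
}

func NewMove(content gjson.Result) *Move {
    return &Move{content}
}

// The getters below mirror the fields in the sample JSON; they are
// assumptions based on how Move is used in the rest of this post.
func (p *Move) GetName() string {
    return p.Get("MoveName").String()
}

func (p *Move) GetType() string {
    return p.Get("MoveType").String()
}

func (p *Move) GetAccuracy() float64 {
    return p.Get("Accuracy").Float()
}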

You can even go a step further and utilize the power of GJSON Syntax in the methods.

package pokemon

// ...

func (p *Pokemon) GetNativeMoves() []*Move {
    pokemonTypes := p.GetType() // Get the pokemon's types
    result := make([]*Move, 0)
    for _, pType := range pokemonTypes {
        // Find all the moves with the same type as the pokemon.
        // The surrounding "#" in #(MoveType==%s)# means it will return all matches,
        // whereas a single "#" i.e. #(MoveType==%s) returns only the first match.
        moves := p.Get(fmt.Sprintf("Moves.#(MoveType==%s)#", pType))
        result = append(result, utils.Map(moves.Array(), func(t gjson.Result) *Move {
            return NewMove(t)
        })...)
    }
    return result
}

The best part is—it eliminates the need to create structs for unnecessary intermediate JSON objects:

package pokemon

func (p *Move) GetDamageType() string {
    // No need to create a MoveEffects struct just to get the DamageType
    return p.Get("MoveEffects.DamageType").String()
}

And finally, to query the JSON structure:

package main

import (
    "fmt"
    "os"

    "github.com/tidwall/gjson"

    "pokemonjson/pokemon" // module path assumed from the earlier "pokemonjson/utils" import
)

func main() {
    content, err := os.ReadFile("response.json")
    if err != nil {
        panic(err)
    }

    response := gjson.ParseBytes(content)
    // Read the "PokemonCatalog"
    catalog := pokemon.NewPokemonCatalog(response)
    // Get the "PokemonList" inside the "PokemonCatalog"
    pokemons := catalog.GetPokemons()
    for _, p := range pokemons {
        // Now access the attributes of each pokemon
        fmt.Printf("Name: %v\n", p.GetName())
        fmt.Printf("Type: %v\n", p.GetType())
        fmt.Printf("Height: %v\n", p.GetHeight())
        fmt.Printf("Weight: %v\n", p.GetWeight())
        for i, move := range p.GetMoves() {
            fmt.Printf("Move %v\n", i+1)
            fmt.Printf("  Name: %v\n", move.GetName())
            fmt.Printf("  Type: %v\n", move.GetType())
            fmt.Printf("  Accuracy: %v\n", move.GetAccuracy())
            fmt.Printf("  Damage: %v\n", move.GetDamageType())
        }
        for i, move := range p.GetNativeMoves() {
            fmt.Printf("Native Move %v\n", i+1)
            fmt.Printf("  Name: %v\n", move.GetName())
            fmt.Printf("  Type: %v\n", move.GetType())
            fmt.Printf("  Accuracy: %v\n", move.GetAccuracy())
            fmt.Printf("  Damage: %v\n", move.GetDamageType())
        }
        fmt.Println("------")
    }
}

The code is available on GitHub.

At any moment, billions of people in the world are experiencing their own unique version of life. Some lead far simpler lives, while others could be living something unimaginably more sophisticated than yours. Some could be living their best moments, while others are enduring their worst nightmares. I sometimes wonder what their lives are like: the people you see at the gym, the strangers across the street whom you'll never see again, those extras in a music video whose names don't even make it into the end credits. While randomly surfing the internet, I came to know that this realization has a name: sonder.

If the universe were a simulation, I wonder how it manages all of these billions of lives running simultaneously. Does it do what games do to save compute power? Does it not render the "objects" that are outside your field of vision? If I am all alone in my bedroom, are my parents still there? Are you still there?

Back to the topic: I often ask people who have seen more of life than I have, and whom I consider wise, what advice they would give to their younger selves. It is always fascinating to hear their answers, but one piece of advice that stuck with me goes something like this:

Just be nice. You don't know what others are going through, so be kind.

Escape Analysis is an optimization technique whereby a compiler identifies whether a variable, created inside a function, escapes the scope of that function. If the analysis confirms that the variable doesn't escape, then it can be more efficiently allocated on the stack rather than the heap.

For example:

def build_obj
  obj = Obj.new()
  return obj # variable escapes this function
end

In the above snippet, the variable obj escapes the scope of the build_obj function since it is returned. So obj can quite possibly be used by the caller after the function returns.

caution

This article uses Ruby code only to illustrate the concepts. Whether the snippets undergo JIT compilation (or not) is not the point that this article aims to address.

In another scenario:

def do_something
  obj = Obj.new()
  print(obj.id)
  # obj does not escape
end

Here, obj is instantiated and then used within the function scope. Most importantly, it doesn't escape.

So, by keeping this technique in mind, what practical optimization can we achieve?

ORMs + Escape Analysis

ORMs abstract SQL away from developers, but they can generate queries that are not always optimal for the use case.

In the Active Record pattern, a simple fetch-by-ID query looks like:

def fetch_document(id)
  return Document.find(id)
end

This, most likely, generates and executes a SELECT * query to fetch a row by id and then does some black magic underneath to map the result to the corresponding attributes in the Document object. The query executed is:

SELECT * FROM documents WHERE (documents.id = 1) LIMIT 1;

Retrieving all the fields is justifiable in the case of fetch_document since the Document object escapes the function, and we do not know how the caller will use the returned object. This approach ensures all the fields are available in case some caller needs them.

In another scenario:

def send_document(id)
  document = Document.find(id)
  return post(document.id)
end

The send_document still executes the same broad query as above and then utilizes only the id field from the result. However, in this case, the document doesn't escape the function, which makes it possible for us to fine-tune the query without breaking its callers. So, a more optimal approach would be:

def send_document(id)
  document = Document.select(:id).find(id)
  return post(document.id)
end

This utilizes a SELECT documents.id FROM documents WHERE documents.id=1 LIMIT 1 query, specifically fetching only the required id field. This is an optimal query for the send_document use case.

Why SELECT * when SELECT x do trick. - Kevin Malone, probably

Where else?

  • GraphQL queries
  • Functions in strongly typed languages where the function's contract is already constrained:
    export function getPostById(id: number): { title: string, body: string } {
    -  const post = client.query(`SELECT * FROM posts p WHERE p.id = ${id} LIMIT 1`)
    -  return { title: post.title, body: post.body };
    +  const { title, body } = client.query(`SELECT p.title AS title, p.body AS body FROM posts p WHERE p.id = ${id} LIMIT 1`)
    +  return { title, body }
    }

#ReadInPublic is my attempt at sharing everything that I've found insightful with the world.

This is a collection of all the blogs/articles that I learnt something from, organized by month and presented as a heatmap: a much easier-on-the-eyes representation of the Knowledge Graph. If you're interested in learning what this is about, here's the origin story :D.

You can even treat it as a newsletter. It gets updated with my Pocket list on the 1st and 16th of every month, so do remember to visit again.

As a long-time user of Pocket, I honor all the insightful and fascinating blogs that I come across on the internet by tagging and adding them to my Pocket list.

And every week, I go through a lot of articles on any topic that I find interesting. Today, I've accumulated over 700 of those, and I'm making that Pocket list available for the whole world to see, in the form of a "Knowledge Graph". So, the network graph that holds all the wonderful reads I've ever found on the internet is now live!

Go to Knowledge Graph

What's new

Thanks to the HN community, I received plenty of feedback and suggestions, and I've managed to work on a few of them.

  • Many found the graph too difficult to traverse and preferred to see it as a list or a tree. Well, now you can! Simply clicking on the tag nodes (the larger ones) will bring up a list of all the articles associated with that tag.

  • It was brought up that the data in the knowledge graph wasn't searchable through the Algolia search bar. Interesting story: the website is (well, was...) actually indexed by the Algolia Crawler, and the knowledge graph, being the one unusual page on the website, wasn't crawlable. So, I retired the Crawler and manually configured the Algolia index through the pipelines. As a result, you can now use Ctrl+K or ⌘+K to search both the tags and the articles in the knowledge graph.

Answers to the FAQs

  • The color of the nodes doesn't represent anything, but their sizes do: the larger nodes are the tags, while the smaller ones are the links to the articles.
  • The network graph is built using D3.js. Everything you see on the screen at the /knowledge-graph path, minus the top navigation bar, is rendered using a ton of DOM manipulations, SCSS, and Docusaurus's Infima.