This small update brings support for ZIP 2.0 via the pyzipper module.
strings_V0_0_10.zip (http)
MD5: F98C9D646A83322BC9226673D79FFE2D
SHA256: 7C062616C95DE5DDF0792A8CE9CA0CCA14FF43A8786DCED043193B729361BB59

This is a post for version updates 0.0.8 and 0.0.9.
Added command officeprotection and option -j for pretty.
xmldump_V0_0_9.zip (http)

This small update brings support for ZIP 2.0 via the pyzipper module and fixes a /ObjStm parsing bug.
pdf-parser_V0_7_10.zip (http)

This small update brings support for ZIP 2.0 via the pyzipper module.
pdfid_v0_2_9.zip (http)

A friend asked me if I had used a GPU for the tests described in my blog post “Quickpost: The Electric Energy Consumption Of LLMs”, because he had tried running an LLM on a machine without a GPU and it was too slow.
I did a quick test, just redoing the previous test but without the GPU (by setting the environment variable CUDA_VISIBLE_DEVICES=-1).
Answering my query took 17 seconds, and required 1.13 Wh (again, for the whole PC).
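Hiding the GPU and converting the measured energy into an average power draw can be sketched as follows (a minimal sketch: the exact environment-variable handling depends on the framework, and the 1.13 Wh / 17 s figures are the whole-PC measurements above):

```python
import os

# Hide all CUDA devices; frameworks that honor CUDA_VISIBLE_DEVICES
# will fall back to the CPU (set this before the framework initializes).
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# Average power draw of the whole PC during the CPU-only query:
energy_wh = 1.13  # measured energy (whole PC)
seconds = 17      # measured duration
watts = energy_wh * 3600 / seconds  # 1 Wh = 3600 Ws
print(round(watts))  # ≈ 239 W
```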


I’ve read claims that AI queries require a lot of energy. Today I heard another claim on the Nerdland Podcast (a popular science podcast here in Belgium): “letting ChatGPT write an email of 100 words requires 70 Wh” (if you’re interested, that’s said at 00:28:05 in this episode).
I thought to myself: that's a lot of energy. 70 Wh is 252,000 Ws (70 W * 3600 s). Assume it takes 10 seconds to write that email; then it requires 25,200 W of power, or 25 kW. That's way more than the theoretical maximum I can get here at home from the power grid (9 kW).
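That back-of-the-envelope calculation, as a quick sketch (the 10-second generation time is my assumption):

```python
claim_wh = 70        # claimed energy for a 100-word email
seconds = 10         # assumed time to generate that email
ws = claim_wh * 3600    # 1 Wh = 3600 Ws (watt-seconds, i.e. joules)
watts = ws / seconds    # average power needed to spend 70 Wh in 10 s
print(ws, watts)        # 252000 Ws, 25200.0 W
```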
So I decided to do some quick & dirty tests with my desktop computer and my powermeter.
First test: measure everything.
Step 1: starting up my desktop computer (connected to my powermeter) and waiting for the different services to start up required 2.67 Wh of electrical energy:

Step 2: I opened a command prompt, started Ollama, typed a query to generate an email, and waited for the result. By then, the required electrical energy of my desktop computer (since starting up) was 3.84 Wh:

So step 2 took 57 seconds (00:02:36 minus 00:01:39) and required 1.17 Wh (3.84 – 2.67). That’s way less than 70 Wh.
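The subtraction for step 2, as a sketch using the meter readings above:

```python
# Cumulative energy readings from the powermeter (whole PC, in Wh)
after_startup = 2.67  # step 1: after boot and services
after_query = 3.84    # step 2: after the Ollama query
# Timestamps on the meter (converted from mm:ss since power-on)
t_start = 1 * 60 + 39  # 00:01:39
t_end = 2 * 60 + 36    # 00:02:36
print(t_end - t_start)                        # 57 seconds
print(round(after_query - after_startup, 2))  # 1.17 Wh
```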

Second test: just measure the query.
I restarted my computer and started Ollama. Then I started my powermeter and pasted my query and waited for the answer:

That took 3 seconds and required 0.236 Wh:

Notice that I have not just measured the electrical energy consumption of Ollama processing my query, but I measured the total electrical energy consumption of my desktop computer while Ollama was processing my query.
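As a plausibility check on that whole-PC measurement, the energy and duration above imply this average power draw (a sketch using the figures from the second test):

```python
energy_wh = 0.236  # whole-PC energy during the query
seconds = 3        # measured duration of the query
watts = energy_wh * 3600 / seconds  # average power draw over the run
print(round(watts, 1))  # 283.2 W
```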
0.236 Wh for a computer running Ollama and processing a query is very different from 70 Wh for ChatGPT processing a query: the claimed figure is almost 300 times higher. So even though my test here is just anecdotal and I'm using a different LLM than ChatGPT, I will assume that 70 Wh is a gross overestimation.
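The ratio behind "almost 300 times", for completeness:

```python
claim_wh = 70       # the podcast's claimed energy per email
measured_wh = 0.236  # my whole-PC measurement for one query
print(round(claim_wh / measured_wh))  # 297
```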
FYI: asking Google “what is the electrical energy consumption of chatgpt processing a query”, I find results mentioning between 1 and 10 Wh. That’s closer to my tests than the 70 Wh claim.