is it just me that's this excited about local #ai models?

google just posted about gemma 3n, a nano model they claim runs in as little as 2GB of RAM (although god knows what hardware they used)

offline, fast(?), no usage limits, privacy-friendly

seems cool – i'm already impressed by running a distilled deepseek model on my iphone atm
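
for anyone curious what "local" looks like in practice, here's a minimal sketch using the ollama python client on a laptop (not the iphone setup, just an illustration; the model name and prompt are assumptions, any small model you've pulled works the same way):

```python
# minimal sketch: chatting with a small local model via the ollama python client.
# assumes ollama is installed and you've run `ollama pull deepseek-r1:1.5b` first.
import ollama

response = ollama.chat(
    model="deepseek-r1:1.5b",  # a distilled variant, small enough for consumer hardware
    messages=[{"role": "user", "content": "why are local llms exciting?"}],
)

# everything above ran on-device: no api key, no network call, no rate limit
print(response["message"]["content"])
```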