
Yodeller

Welcome to the Yodeller: my ongoing project to try to write something every day. You can read more about the background here. If you are a new reader you might want to start from the beginning.

Internal A.I.

Over the past few weeks I have been building a new service for internal use within our company, to utilize different A.I. models for various tasks. It's amazing to see how far A.I. tools have evolved in such a short time. Not just the models themselves, but also the tools and frameworks that utilize them and the services built on top of them.

We have been using various tools for different tasks, but that approach has several problems. The first is cost. These services are usually priced per user, so licensing even one of them for all employees adds up to a lot of money.

The second problem multiplies the first one. Beyond the really generic base "chat" services, the tools are limited to certain tasks. So just one isn't enough; you need all kinds of different services to cover every aspect of our work.

Data privacy and confidentiality are also a big challenge. We work with a lot of confidential data that we can't just send to services where we can't be certain the data won't leak or be used to further train the underlying model.

Don't want to get political

Results are in and it seems we are going to see some interesting times ahead. It's hard to comprehend how we ended up in this situation. From outside it looks like this should never have been even possible. But somehow here we are.

I don't care much about politics, and even less so when it's not something close to my daily life. But in this case the outcome might affect far more people than anything in the past almost hundred years.

It's a bit scary to even think about the implications. Only time will tell where this all is leading. But it is what it is. We just need to hope for the best.

Throwing the phone on the floor

I usually take good care of my phone, and any other delicate stuff for that matter. But over the past few days I have managed to drop my phone not once, but twice! I'm pretty sure these have been the first (and the second) times I've ever dropped my current phone. I don't even remember ever dropping any of my previous ones either.

The good thing is that the phone is nearing the end of its life. Not that I would want to change it, but the fact that it's already past its vendor support will soon force me to get a new one to keep up with security updates. Another good thing is that neither of those drops caused any damage to the phone.

Maybe I've just subconsciously acknowledged that it needs to be replaced soon and relaxed the safeguards keeping it safe.

Sorting strategies

Organizing my LEGO collection has given me a good chance to reflect on how our memory and vision (among other things) work. Trying to find a certain piece among the lot is a good exercise for observing these functions, and understanding them better helps improve the overall efficiency of the process.

The most straightforward way of finding certain pieces would be to just keep everything in one pile and go through it every time. The problem is that we can only keep a small number of things in our mind, in our working memory. I can hold about 10-12 things in my mind. That's not nearly enough, as usually there are hundreds of pieces that need to be found, and keeping all of them in mind at once just isn't possible.

Another limitation is our vision, or rather our perception. I can see all those pieces in front of me, but if I'm not actively looking for something, it's all just meaningless information my mind filters out. To find anything I need to be looking for specific things. And that requires even more "processing power" than just keeping things in mind. Usually I can look for maybe 3-4 things at the same time.
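The trade-off described above can be sketched in code. This is a minimal, hypothetical model (the function name, batch size, and piece names are illustrative, not from the post): if you can only actively look for a few piece types at once, you split the wanted pieces into small batches and scan the whole pile once per batch.

```python
# Hypothetical sketch: searching a pile for many target pieces when you can
# only "hold" a few targets in mind at once (a working-memory limit of ~3-4).
def find_in_batches(pile, targets, batch_size=4):
    """Scan the pile once per batch of targets, mimicking the limited
    number of piece types one can actively look for at the same time.
    Returns the positions of each found piece type and the number of
    passes (full scans of the pile) that were needed."""
    found = {}
    remaining = list(targets)
    passes = 0
    while remaining:
        batch = set(remaining[:batch_size])   # what we keep "in mind" this pass
        remaining = remaining[batch_size:]
        passes += 1
        for i, piece in enumerate(pile):      # one full scan of the pile
            if piece in batch:
                found.setdefault(piece, []).append(i)
    return found, passes

pile = ["2x4 brick", "plate", "2x4 brick", "tile", "slope", "plate"]
found, passes = find_in_batches(pile, ["2x4 brick", "tile"], batch_size=4)
# With 10 wanted piece types and a batch size of 4, you would need 3 passes.
```

The point of the sketch is that the number of passes grows with the ratio of wanted pieces to the batch size, which is why a small working memory makes the one-big-pile strategy slow.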

Baby steps towards singularity

In some sense we have already reached the singularity (or, more precisely, A.I. has reached it). By some definitions the technological singularity happens when an artificial intelligence starts to autonomously improve itself.

In a way this already happens when developers use A.I. coding tools to create better models. Of course, that's not strictly autonomous (unless the machine is playing us and using us to improve itself this way).

This week GitHub released Spark, their A.I.-powered app generator. It can build simple apps from a natural language description. It's intended for small, simple apps at this point, but nobody really knows the generator's full capabilities. So in theory it could be hooked up to itself and told to keep writing a better version of itself.

It might not be able to do that just yet, but neither were we humans capable of what we can achieve today just a few years ago. And our learning capability is somewhat limited.

In any case, if this happens, it probably won't be the superintelligence we have been waiting for (or fearing). It would just be a simple machine tasked with improving itself indefinitely. Of course, it could end up doing so at the expense of all the energy available in the universe, consuming everything else in the process.
