Here’s some context for the question. When image-generating AIs became available, I tried them out and found that the results were often quite uncanny or even straight-up horrible. I ended up seeing my fair share of twisted fingers, scary faces and mutated abominations of all kinds.
Some of those pictures made me think: since the AI really loves to create horror movie material, why not take advantage of that? I started asking it to make all sorts of nightmare monsters that could have escaped from movies like The Thing. Oh boy, did it work! I think I’ve found the ideal way to use an image-generating AI. Obviously, it can do other stuff too, but in this particular category, the results are great nearly every time. Making other types of images usually requires some creative promptcrafting, editing, time and effort. When you ask for a “mutated abomination from Hell”, it’s pretty much guaranteed to work on the first try.
What about LLMs though? Have you noticed that LLMs like ChatGPT tend to gravitate towards a specific style or genre? Is it long-winded business books with loads of unnecessary repetition, or is it pointless self-help books that struggle to squeeze even a single good idea into a hundred pages? Is it something even worse? What would be the ideal use for LLMs? What’s the sort of thing where LLMs perform exceptionally well?
Anything where accuracy does not matter. Writing sports commentary articles, for example.
Exactly.
LLMs are ideally suited for replacing corporate middle managers everywhere.
deleted by creator
That’s a pretty cool site. Next time Bing fails me, I’ll try this site to see if the results are any better.
With the proper documentation, LLMs are great at helping with code. Take Phind, which uses GPT-3.5 but with sources. It’s great for small code snippets and pulls its answers from documentation and Stack Overflow.
I’ve had free access to GitHub Copilot since the beta and it’s great, especially when working with unknown libraries or languages. I don’t have to pull out the documentation and can get on with the logic. Of course it often hallucinates, and the code it spits out needs to be checked, but still, it saves a lot of time.
I’ve had some good experiences with asking Bing to write a few lines of VBA or R. Normally, I’ll just ask it to solve a specific problem, but then I’ll modify the code to suit my specific needs.
… Eh, no. I’ve seen GPT generate some incredibly unsound C despite being given half a page of text on the problem.
C is already incredibly unsound /hj
I use it to add more dimensions to my D&D sessions. For example: every town now has at least one shop that sells t-shirts. I describe the setting to ChatGPT, then ask it to come up with 10 shirt ideas, 3 or 4 of which will be pretty good. One of my players has started collecting the shirts.
One time GPT even came up with a shirt design that I could use as a major plot clue. The players missed it, but it would have helped them out quite a bit.
Oh, that’s interesting. You could also ask GPT to generate names and descriptions for places and NPCs according to your specifications. I suppose you might still need to modify these things a bit so that everything works in the story you’re building.
I feel LLM-created texts often use rigid structuring along with the fitting linking words and phrases – “on one hand…, on the other hand”, “furthermore”, “in conclusion” – like a high school student writing an essay. Also, the content may or may not be correct and is mostly just stolen from several sources and patched together without any thought or care – also like a high school essay. So I’m gonna go with that.
TL;DR ChatGPT = What to Expect When Expecting
Generating a large number of utterances to train your cloud service language model for a bot, because I’m sure not writing hundreds of utterances all asking the same thing.
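For contrast, here’s a toy sketch of what doing this by hand amounts to: template expansion over a few interchangeable phrasings. The intent, slot names and phrasings below are all made up for illustration – an LLM effectively does this with far more variety, which is the whole appeal.

```python
from itertools import product

# Hypothetical intent: a user asking to check their account balance.
# Each list holds interchangeable phrasings; their cartesian product
# expands into many distinct training utterances for the same intent.
openers = ["", "hey, ", "please ", "can you "]
verbs = ["show", "check", "tell me", "give me"]
objects = ["my balance", "my account balance", "how much money I have"]

def generate_utterances():
    """Expand the phrasing slots into one utterance per combination."""
    return [f"{o}{v} {obj}".strip() for o, v, obj in product(openers, verbs, objects)]

utterances = generate_utterances()
print(len(utterances))  # 4 * 4 * 3 = 48 variants from three short lists
```

Even this rigid approach yields dozens of variants from a handful of lines; the LLM’s advantage is producing phrasings you would never think to template.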
Not that this has to do with image generation, but I’ve always thought translating legal jargon down to a single paragraph of plain English would be a good purpose for AI. Imagine bullet points of all the major things you’re handing over when clicking “I agree” on an Apple terms and conditions agreement.
Or better yet, ask it to summarize all the things that matter to someone who isn’t an app developer, isn’t trying to sue Apple, isn’t trying to hack the software, isn’t trying to build anything on their software, isn’t trying to sell anything and doesn’t even run a business of any kind. There are more than a few paragraphs specifically trying to cover all of these special cases, and they don’t really concern someone who just wants to use the iPhone to call their grandma.
I’ve had some success quietly replacing middle-to-executive management with LLMs.
It’s not perfect, but the quality and coherence of the ideas went up a moderate amount. Obviously a good CEO is a valuable thing, but lacking that, ChatGPT does OK at defining company direction and strategy.
It’s not good enough to replace a half-decent copywriter though.
You severely underestimate the demand for crappy copy that AI is perfectly able to supply.
I see your point, and I agree there will indeed be a lot of demand. My own strategy is to move against this kind of trend, though. When the competition focuses on SEO, dark patterns, and cheap crud – double down on quality and customer loyalty. When they are over-focusing on quality, then make it cheap, cheerful, and easy to find :P
On the boards I advise (just a few, I’m not that influential), a lot of the use of LLMs has stemmed from (frankly) lazy executives not doing their job (their jobs are mostly judgement and delegation – this is a failure of both). Quality control balked at what they suggested publishing (it was nowhere near good enough, and off-brand). There’s a lesson I hold to heart: once something stops doing the things that give it identity, it begins to fragment and fall apart. Whether it’s Greece, Rome, one of several Chinese dynasties, a company (Radio Shack! Sears!)… or all those executives and managers in retirement when there are no more decisions to make or people to manage :D
So yeah, there’s going to be a big demand for it, but that’s exactly why I consider our copywriters and designers more valuable now. It’s an opportunity for them to shine. Should be easy to retain them (or hire more) in the coming market too – and for the current executive, what a missed leadership opportunity! I’m not blameless either – my job is to persuade them, and I haven’t succeeded.
Anyway, that’s a little slice of my life, which I hope you found entertaining.
Don’t get me wrong though – I do love LLMs and also image diffusion models. I’m really excited by their future, especially for coding and high-level planning and reasoning! They’re not that good at these things yet, but I think it’s going to happen. I could make so many excellent things to share with the world – e.g. even if they just help me reliably debug faster, or if they write the code and I write the unit tests by hand!
Funnily enough: revealing plagiarism. Or even just judging the originality of a given text. Train it to assign an “originality value” between 0 (I’ve seen this exact wording before) and 1 (this whole text is new to me) to help universities, scientific journals or even just high schools judge the amount of novelty a proposed publication really provides.
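As a toy illustration of that 0-to-1 scale (not how real detectors work, and trained models would behave very differently), one could score novelty by word n-gram overlap against a reference corpus. Everything here – the function names, the 3-gram choice, the sample corpus – is invented for the sketch:

```python
def ngrams(text, n=3):
    """Set of lowercased word n-grams in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def originality(text, corpus, n=3):
    """Share of the text's n-grams NOT seen in the corpus:
    0.0 = every n-gram already known, 1.0 = entirely new."""
    seen = set()
    for doc in corpus:
        seen |= ngrams(doc, n)
    grams = ngrams(text, n)
    if not grams:
        return 1.0  # too short to judge; treat as new
    return 1 - len(grams & seen) / len(grams)

corpus = ["the quick brown fox jumps over the lazy dog"]
print(originality("the quick brown fox jumps over the lazy dog", corpus))  # 0.0
print(originality("a completely different sentence about cats", corpus))   # 1.0
```

A verbatim copy scores 0.0 and unrelated text scores 1.0, matching the proposed scale – though as the replies below note, light paraphrasing defeats overlap measures like this, which is part of why real detection is so hard.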
Recently I’ve seen some discussion surrounding this. Apparently, this method also gives lots of false positives, but at least it should be able to help teachers narrow down which papers may require further investigation.
Recent studies show it doesn’t work at all, and it has likely caused irreparable harm to people whose academic work has been judged by all of the services out there. It has finally been admitted that it didn’t work and likely won’t.
Well yeah, that approach would work if you train it on one model, but that doesn’t mean it would work on another. For the average user who uses ChatGPT, though, it’s probably enough to detect it at least 80–90% of the time.