
Chat GPT


Comments

  • It's nearly two years since this thread started. Have Lifers' views changed? Who is using it more - or less - than expected? What do you think of its developments? Has it become part of your everyday working life yet?
  • I use it quite a bit to give me first drafts to consider - however, I tend to have to rework them almost completely, as the output never reads as though a human wrote it, and sometimes it throws in completely random balderdash.
  • My sister says she uses it in a similar way. The AI generates something, she reads it and goes "well those bits are a load of old shit, and I've no idea if that bit is true", and then uses the structure of the thing as a basis for what she actually has to write. It's a way of getting past the initial inertia of getting stuff down, but she doesn't rely on the actual detail of the output, because she's seen too many instances of it churning out incorrect stuff.
  • Well, a week is a long time in AI. After a beer this time last week with a mate who is well up on this stuff, I am now liking Claude for the things that ChatGPT can do - friendlier interface, good on translation - and Perplexity, a much better alternative to Google for search. You can ask it a question such as "pros and cons of all-weather tyres vs a summer/winter tyre change" and it will give you a coherent answer in proper usable language, and it always shows you its sources. Great if you need to brief yourself before wading into some discussion that requires facts.

    However I'm still looking for something that can turn texts in bullet point format into a PowerPoint presentation. Anyone doing that task regularly got a tip for that?
    Yeh.

    Put a PowerPoint presentation together using the bullet point texts as headers.
  • I use it quite a lot in my work now; it's taken me from an incompetent coder who would take ages trying to write code to someone who can code quickly. It's rarely right first time with the code, but if you can understand what's wrong you can easily get it to tweak it until it spits out code that works perfectly.
  • follett said:
    I use it quite a lot in my work now; it's taken me from an incompetent coder who would take ages trying to write code to someone who can code quickly. It's rarely right first time with the code, but if you can understand what's wrong you can easily get it to tweak it until it spits out code that works perfectly.
    Same for me; the only downside is you're teaching a Large Language Model to replace you in your job - which is coming.

    As a developer, I can actually see the role transitioning to an "AI expert" - someone who knows what prompts to use and how to effectively mark its work before pushing anything live.
  • edited January 3
    In November I went for a beer with an old colleague and he told me he uses Claude rather than ChatGPT, and Perplexity rather than Google for search. Both were excellent recommendations: I find Claude much more user-friendly than ChatGPT, while Perplexity tells you the source sites it used to answer your question. Thanks to Claude my capability to publish blogposts in Czech has been turbocharged, and he gives a great answer to any task that looks daunting, such as trying to understand how Stansted Airport operates so well with just one runway, so I'm motoring. We have a niece who is becoming an ace physiotherapist, and she finally asked us to translate her clinic's website into English. We used Claude. Impeccable. 15 minutes and all done. We looked at each other and agreed "we're not going to tell her we used Claude" 🤣

    Mind you, when I get him to edit a blogpost (in English) it isn't perfect. He can brutally edit my meandering off topic, which I need, but the resulting text often reads like it's from a jobbing American hack journalist writing for the Des Moines Trumper. Sometimes that's OK, but often it's so bland I go back and inject some more authentic phrases. In other cases I'll get him to tidy up grammar and syntax, and am embarrassed (as someone who likes to remind people that "woke" is the past participle of a verb, and not an adjective or adverb) to find that he had some work to do.

    I'm still lacking the platform that will create professional-looking PPT slides from text for me. My buddy says I will find it on Canva, but you have to know how to navigate that, and so far I haven't cracked it. Would love a good recco for that.
  • edited January 3
    It's like having a very patient tutor in your pocket. If there's anything I'm learning that I can't understand, it breaks it down into more manageable and understandable chunks.
  • follett said:
    I use it quite a lot in my work now; it's taken me from an incompetent coder who would take ages trying to write code to someone who can code quickly. It's rarely right first time with the code, but if you can understand what's wrong you can easily get it to tweak it until it spits out code that works perfectly.
    Same for me; the only downside is you're teaching a Large Language Model to replace you in your job - which is coming.

    As a developer, I can actually see the role transitioning to an "AI expert" - someone who knows what prompts to use and how to effectively mark its work before pushing anything live.
    Agree, although I envisage it being a complete regulatory nightmare for a while yet before any real replacement kicks in. There's definitely a role for humans to work alongside AI in my line of work, but it will certainly cut down the amount of human input needed.
  • edited January 3
    I am working for a company whose product is an AI coding software. 

    It’s mad how many companies are throwing themselves at these types of tools, worried that they could be falling behind their competition. Feels a bit like keeping up with the Joneses…


    And yes, I am using it every day for first drafts of code, email responses, technical write-ups etc. I often tweak it heavily to fit my personal writing style, but it definitely gets you over the initial hump, and it makes my emails especially read much more professionally.

    I wouldn’t recommend anyone use it for school work - too many tools out there now that can sniff out “AI generated” work.
  • A good use case I found for Chat GPT / Claude is pasting in your CV and a job description and asking “given the above JD and CV, share questions that I might expect from an upcoming job interview”.

    You can then have a friend / partner run a mock interview for you using those questions that the model came up with.
  • At work we have more or less stopped doing online coding interviews as candidates were clearly using GenAI to come up with answers.

    My wife has had multiple instances of university students passing off GenAI essays as their own, some with hilarious consequences.

    I tried using it to summarise newspaper articles but gave up, as it was not giving me anything more than what you could surmise from the headline and the subtitle, or a quick scan of the article.

    I've also used it a couple of times for general coding problems, to save me a bit of time versus trying to understand answers from coding sites like stackoverflow. But haven't managed to use it for bug tracking and fixing of existing code, which is where I spend most of my time and mental effort.

    My cousin uses it quite a bit for producing reports for clients.

    So, in summary, a useful tool for certain tasks, but I still think calling it artificial 'intelligence' is a bit of a marketing ruse 🙂
  • Chizz said:
    It's nearly two years since this thread started. Have Lifers' views changed? Who is using it more - or less - than expected? What do you think of its developments? Has it become part of your everyday working life yet?
    I use it daily and would find it impossible to do my job to the level I currently do without it.

    Absolute game changer.
    Same here … also using Perplexity, as it's useful to compare results.
  • I do a lot of writing and I find it good for ideas when I’m stuck. Also, if I struggle to word something to my satisfaction, I give ChatGPT a chapter or two so it gets my style then ask how to précis what I’m writing or use some better adjectives etc. Again, I only use for ideas or to help with any blocks.
  • edited January 3
    Off_it said:
    Not the first time Apple’s AI has had problems - they clearly released something half baked in a panic. They’ve fallen behind Microsoft, Meta and Google massively when it comes to AI.
  • I do a lot of writing and I find it good for ideas when I’m stuck. Also, if I struggle to word something to my satisfaction, I give ChatGPT a chapter or two so it gets my style then ask how to précis what I’m writing or use some better adjectives etc. Again, I only use for ideas or to help with any blocks.
    I’ve used it that way too … can be surprisingly useful. 
  • Apologies if mentioned before, but the podcast Shell Game by Evan Ratliff is rather amusing.

    Using ChatGPT, Evan clones his own voice and creates a chatbot which he sets against nuisance callers. He also gets it to seek online therapy, and creates another to chat with his original one.
  • Leuth said:
    We are sleepwalking into hell
    Why?
  • stonemuse said:
    Leuth said:
    We are sleepwalking into hell
    Why?

    Firstly, tools like ChatGPT, Perplexity, Bard etc are neither inherently good nor bad; they are shaped by the intent of their creators and the context of their use. Based on my experience, they enhance productivity and democratise access to knowledge.

    However, the risks associated with misuse, bias, and dependency on such systems are real and need vigilance. That’s a challenge for regulators and society.

    Saying that we “sleepwalk” into hell displays a passive, unthinking consideration of this technology, which I find to be an oversimplification. 

    While some individuals or organisations may adopt AI without fully understanding its implications, there is a growing global discourse about its ethical and practical boundaries. Conversations like these are evidence that people are questioning, debating, and seeking to shape AI’s role in society.

    It’s important to acknowledge that, like any powerful innovation, AI has the potential for harm. However, the solution lies not in rejecting it outright, but in taking an active role in guiding its development. Otherwise Luddism beckons. 

    With thoughtful engagement and collective effort, ChatGPT and similar tools can be part of a future that enhances human capabilities rather than diminishes them. 

    To see it as a march towards hell, rather than an opportunity to shape progress responsibly, risks ceding control of the narrative to fear rather than informed dialogue.

    As I guess is obvious, I am a big fan.  

  • Apologies if mentioned before, but the podcast Shell Game by Evan Ratliff is rather amusing.

    Using ChatGPT, Evan clones his own voice and creates a chatbot which he sets against nuisance callers. He also gets it to seek online therapy, and creates another to chat with his original one.
    Didn't hear the whole thing as I didn't want to subscribe, but the segment I could get reminded me a bit of the old AI conversation system 'Eliza' from the 1960s, which would mostly give you either a slightly modified version of what you said, or some generic follow up.

    It's on the web. Here's an example interaction I had (* is me, > is Eliza):


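    For anyone curious how little machinery Eliza actually needed, the whole trick the post above describes - match a keyword pattern, swap the pronouns in the captured fragment, echo it back as a question, and fall back to a generic prompt when nothing matches - fits in a few lines. This is just an illustrative sketch, not Weizenbaum's original code; the rules and responses here are made up for the example:

```python
import random
import re

# Pronoun swaps applied to the fragment echoed back to the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# Keyword rules: (compiled pattern, response templates).
# {0} is filled with the pronoun-reflected captured fragment.
RULES = [
    (re.compile(r"i need (.*)", re.I), ["Why do you need {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["How long have you been {0}?"]),
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?"]),
]

# Generic follow-ups used when no keyword rule matches.
GENERIC = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    """Return an Eliza-style reply to one line of user input."""
    for pattern, responses in RULES:
        match = pattern.match(sentence.strip())
        if match:
            return random.choice(responses).format(reflect(match.group(1)))
    return random.choice(GENERIC)  # nothing matched: generic follow-up
```

    Typing "I need a holiday" gets back "Why do you need a holiday?", while anything outside the rule list draws one of the canned follow-ups - which is exactly the "slightly modified version of what you said, or some generic follow up" behaviour described above.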
  • I haven't taken much notice of this up to now, but with a lot of discussion in the media on the reduction in fact checking on some social media platforms, I wondered how this will impact AI. 

    I understand that these AI systems gather some of their info from social media, and with so much misinformation on those platforms, how will we be able to rely on the information that comes out of AI?
  • edited January 12
    stonemuse said:
    stonemuse said:
    Leuth said:
    We are sleepwalking into hell
    Why?

    Firstly, tools like ChatGPT, Perplexity, Bard etc are neither inherently good nor bad; they are shaped by the intent of their creators and the context of their use. Based on my experience, they enhance productivity and democratise access to knowledge.

    However, the risks associated with misuse, bias, and dependency on such systems are real and need vigilance. That’s a challenge for regulators and society.

    Saying that we “sleepwalk” into hell displays a passive, unthinking consideration of this technology, which I find to be an oversimplification. 

    While some individuals or organisations may adopt AI without fully understanding its implications, there is a growing global discourse about its ethical and practical boundaries. Conversations like these are evidence that people are questioning, debating, and seeking to shape AI’s role in society.

    It’s important to acknowledge that, like any powerful innovation, AI has the potential for harm. However, the solution lies not in rejecting it outright, but in taking an active role in guiding its development. Otherwise Luddism beckons. 

    With thoughtful engagement and collective effort, ChatGPT and similar tools can be part of a future that enhances human capabilities rather than diminishes them. 

    To see it as a march towards hell, rather than an opportunity to shape progress responsibly, risks ceding control of the narrative to fear rather than informed dialogue.

    As I guess is obvious, I am a big fan.  

    The serial comma gives it away ;)