Eliza is a good counselling/mentoring tool. Not a coaching one.
But that has to be factored with availability, opportunity and motivation
It's a terrible counselling tool! But it's amazing what they could do in 1966.
Almost makes me wonder if the moon landings were not faked
Yep, none of which is insurmountable. Particularly as it only has to be good enough, maybe slightly better than the human interactions people have with coaches.
The real trick will be the human interaction, apparently people are simulating relationships with AI to combat loneliness already so perhaps that has already been achieved?
For me, there's no non-human interaction as good as a friendly human relationship, but the high-volume, low-net-worth newbie market could be lapped up. It's the high-end market that will stay human, imho. Which kind of makes sense regardless of the market, right?
because?
"Do any reasons come to mind?"
When AI was first mooted, the scientists assumed that the challenge was replicating how humans see in order to move around the world; however, the real challenge has been to understand how humans orientate themselves… that's the complex part, and certainly that remains with AI coaching…
yes, it allows people to think "out loud" and understand what concerns them and why, without the biased intervention of a counsellor…
see my post about Adam Curtis's work elsewhere…
I understand. Go on?
Agree it's very clever and amazing for the era, but once you have used it a few times you realise it's just bouncing balls off a wall, I think
Whilst that may well be the outcome, it actually works better the other way around…
I would suggest that most top athletes are working towards marginal gains (very little technical coaching takes place, much less than should be the case)… and whilst some of the athletes at the top end need the occasional hug (current safeguarding stupidity notwithstanding), most are resilient.
it's the other end of the market that would benefit from more quality coaching…
That's quite interesting.
it encourages the user to create their own strategy for understanding and resolution, rather than needing a.n.other to tell them what they could often have worked out themselves…
Why do you think that is the case?
Tell me more?
Is there anything else?
What would you like to happen?
How might that come about?
Most of the people on this planet go into fix mode in these situations rather than listening mode… counsellors are just better at listening for longer…
For sure, it was an early & groundbreaking attempt at simulating the reflective function of a counsellor.
But a bit of a one trick pony, I feel.
Later, I'm going to cut up Mr Johnson from number 73 and saute his pancreas with a little basil and rosemary.
Do you enjoy going to cut up mr johnson from number 73 and saute his pancreas with a little basil and rosemary ?
Can't help but feel a decent counsellor might tread a different path here, for example
counselling… not writing up Lecter's autobiography…
As I mentioned before, LLMs are just complex probability algorithms. A machine can simulate well, and maybe in many, many years it will have enough data points to hold a quite realistic conversation, but it won't understand that conversation. It won't understand subtleties in language, it won't understand local colloquialisms, and it won't understand feelings or pick up on emotions. It will be like an autistic child who takes everything at its literal meaning and formulates a response to that with no emotion. Maybe that is what we need, but I'm not convinced it works in any form of coaching (sports, formal education, health, mental wellbeing), because it will never be a human and will never understand human compassion.
"Sean failed his driving test." This text pattern will in time be learned; a formulated response it may come up with is "what is Sean?" > "Sean is my son." The computer is then looking at probabilities of responses based on its training data, not thinking "he must be really upset to bring that up, let's discuss this further". At best it can associate "failed" and "son" as creating a negative pattern and use the probabilities of learned conversations, but where is it getting these conversations and patterns from? "Big Data"? The GPTs were trained on texts, books and websites, but not on conversations between coach and athlete. How can you train a CoachGPT to act like a coach if it's not trained on the millions of athlete/coach conversations that are had weekly? And how does it know the difference between a successful conversation and an unsuccessful one, and how would it measure those results?
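The "just probabilities" point can be made concrete with a toy example. Below is a minimal bigram model in Python; the tiny corpus is invented for illustration. It picks the next word purely from co-occurrence counts, with no notion of what "failed" means to anyone.

```python
# Toy bigram model: the "next word" is chosen purely by counting which word
# most often followed the previous one in the training text. No understanding,
# just frequencies. The corpus below is invented for illustration.
from collections import Counter, defaultdict

corpus = ("sean failed his driving test . "
          "sean failed his theory test . "
          "sean is my son .").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word):
    """Return the most frequent continuation seen in training."""
    return counts[word].most_common(1)[0][0]

print(next_word("sean"))  # -> failed (seen twice, vs "is" once)
```

The model will happily report that "sean" is most often followed by "failed", but nothing in it represents who Sean is or why that matters, which is the point being made above.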
ML is certainly a realistic alternative. I spent months studying the EdX stuff, and it's not possible to list here all the various algos they taught, and there are infinite possibilities of more. But even the simplest ML still comes down to how well you code the weighting algorithms, how well you set the weights, how many data points you add to the training data, how you quantify results, how you set boundaries on the data points you use for training, and how you then utilise those results to create a sensible output.
One example I learned (@fruit_thief will remember) was a two-input model created to predict rain or no rain. It had inputs of pressure and humidity, with a result of whether it rained or didn't, and you had to write an algorithm with three weights for the two inputs; those weights had to be worked out with a function that looked at all the input data to estimate the best weighting, using the perceptron learning rule. The inputs were minimal and the result was easy, rain or no rain, but how many inputs, and what results, would be required in an ML triathlon coach model?
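For anyone who didn't do the course, that exercise can be sketched in a few lines of Python. The data below is invented and pre-scaled to 0–1 (the real exercise used actual pressure/humidity readings), so treat this as a minimal illustration of the perceptron learning rule, not a reproduction of the EdX model.

```python
# Perceptron learning rule on a made-up two-input rain/no-rain dataset.
# weights[0] is the bias; weights[1:] pair with the two inputs, matching
# the "3 weights for 2 inputs" setup described above.

def predict(weights, x):
    """Threshold unit: predict rain (1) if the weighted sum is >= 0."""
    activation = weights[0] + weights[1] * x[0] + weights[2] * x[1]
    return 1 if activation >= 0 else 0

def train(data, epochs=20, lr=0.1):
    """After every example, nudge each weight by lr * error * input."""
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, label in data:
            error = label - predict(weights, x)
            weights[0] += lr * error          # bias input is implicitly 1
            weights[1] += lr * error * x[0]
            weights[2] += lr * error * x[1]
    return weights

# (scaled pressure-drop, scaled humidity) -> 1 = rain, 0 = no rain (invented)
data = [
    ((0.2, 0.3), 0),
    ((0.1, 0.5), 0),
    ((0.8, 0.9), 1),
    ((0.7, 0.8), 1),
    ((0.3, 0.4), 0),
    ((0.9, 0.7), 1),
]

weights = train(data)
print([predict(weights, x) for x, _ in data])  # -> [0, 0, 1, 1, 0, 1]
```

Even in this tiny case, most of the work is in choosing and scaling the inputs and deciding what counts as a result; a triathlon-coach model multiplies that problem by orders of magnitude.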
People will lap it up though. 99.9% will not understand what it is doing, and the few who do will understand it's a marketing gimmick doing little more than looking at a load of TP inputs and measuring expected CTL at the end of it.
it's the potential implications of this, which may flex over time, that I cannot see AI fully appreciating… right now it might be emotive and affect the session, next week it might be forgotten, in another week it may be a stressor because Sean needs dropping off before a session, etc…
And that is not what a GPT is designed to do; they are language models. We could utilise ML to add more context to that language model but, as I hinted at above, Garmin does not have government funding and the server power to get close to that level of processing, certainly not for a considerable time yet. They will be doing little more than playing currently.
I could knock up a Python script to perform "AI" on training data. I believe Alan Couzens has a load of sample Python on his website that scrapes TP data, which he uses to check fatigue, HRV etc. With access to the API you could write some simple enough code to create sessions based on templates, looking at the inputs from TP, but that's adding little value, and it's all that most of these "AI" apps are doing. MySwimPro claims to be AI, but all it's doing is adjusting pre-banked sessions with times and turnarounds based on your previous sessions and swim test sets.
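To show how little "AI" that kind of app needs, here is a sketch of a template picker. Everything in it is invented: the session texts, the thresholds and the rule logic are hypothetical stand-ins, and nothing here touches the real TrainingPeaks API; CTL/TSB values would come from there in practice.

```python
# Hypothetical "AI coach": pick a pre-banked session template from two
# summary metrics (CTL = chronic training load, TSB = training stress
# balance). Templates and thresholds are invented for illustration.

TEMPLATES = {
    "recovery": "30 min easy spin, HR zone 1",
    "endurance": "90 min steady ride, HR zone 2",
    "intervals": "6 x 3 min @ FTP, 3 min recoveries",
}

def pick_session(ctl, tsb):
    """Crude rules: tired athletes get recovery, fresh fit ones get intensity."""
    if tsb < -20:                 # deeply fatigued
        return TEMPLATES["recovery"]
    if tsb > 5 and ctl > 60:      # fresh and well trained
        return TEMPLATES["intervals"]
    return TEMPLATES["endurance"]  # default: steady volume

print(pick_session(ctl=70, tsb=10))  # -> 6 x 3 min @ FTP, 3 min recoveries
```

A handful of if-statements over two numbers, yet dressed up in an app this is indistinguishable from what many "AI coaching" products appear to deliver.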
Do not tell ChatGPT you're Sarah Connor.
While I fully appreciate that a human coach can make the logic jump from "Sean failed his driving test" to how that could impact Sean's dad's training, the AI maybe doesn't need to. Assuming that Sean's dad is using AI coaching knowingly and willingly, they should give it inputs that will result in the most meaningful feedback. It is for Sean's dad to make the leap and input "I can't train on Thursday as I have to drive my son to football practice", or "Going out for a pizza and beer on Saturday to console my son". The athlete has to make the initial assessment, but that is why I see AI as a step up from a boilerplate training plan, but a step down from a personal coach.
Sean's mum, but hey ho…
but that's not what it says…
in most cases the athlete cannot make the initial assessment; in those cases, where it is not an either/or, the coach can potentially better evaluate the concern and thereby adapt the intervention more effectively…