Tony's main research interest is development in autism and the clinical application of this work via screening, diagnostic, epidemiological, intervention, and family studies. During this podcast he chats to Sue about a piece of work looking at outcomes from an autism early screening study that taught him a lot about the value of samples, cohorts, and methods.
You can find out more about Tony's work at his online profile. You can follow Tony on Twitter here.
The paper being discussed in this podcast is:
A screening instrument for autism at 18 months of age: a 6-year follow-up study. Baird G, Charman T, Baron-Cohen S, Cox A, Swettenham J, Wheelwright S, Drew A. J Am Acad Child Adolesc Psychiatry. 2000 Jun;39(6):694-702. doi: 10.1097/00004583-200006000-00007.
[Podcast jingle][ringtone] Hello? Oh, it is recording. I see the little figure. Okay, great. I will do my little spiel and then I'll introduce you. Nice. Okay. Here I go. Hi, I'm Sue from the Salvesen Mindroom Research Centre. And today I'm talking to Tony Charman from the Institute of Psychiatry, Psychology and Neuroscience at King's College, London. Tony is, um, uh... very well-known for his work on early development, in autism in particular. And he's going to talk to me today about a paper that was published in 2000 with, um, the title "A screening instrument for autism at 18 months of age: A six year follow-up study". So hello, Tony. Thank you very much for coming on the podcast. Tony:
Hi, Sue. It's nice to see you. I'm looking forward to talking about the work, but thanks for inviting me. Sue:
You are extremely welcome. I'm delighted that you could manage it. So why don't you start by telling me what was the kind of main finding from this bit of research that you've chosen to talk about? Tony:
Yeah, so the, um, the main finding was that this was a follow-up of a population that we'd screened at 18 months of age with a, um, with a checklist, essentially. It's a really quick checklist that GPs and health visitors and parents completed together, asking, um, about 12 questions about some early signs and symptoms of autism at the age of 18 months. And we previously positiv-, um, we previously published some positive findings showing it was possible, really for the first time, to prospectively identify children with autism using the screening method from a community, you know, as young as 18 months of age. Anyway, that was several years before this. And then we went back, you know, when the population was age seven, to find all the children in that community who had a diagnosis of autism, and this was from a population of 16,000 infants. So it's a large sort of study. Um, and then what we found in this follow-up, you know, was that actually, although we had identified some children with autism, we'd missed more than we'd identified. So, um, we can talk about it in more detail, but in sort of screening-instrument terms, the screen turned out to have a rather low sensitivity. So it only identified just around 40% of the children who went on to have a diagnosis of autism. That's missing more than it identified. So in, in some senses, this was a bit of a downer, um, in terms of it wasn't as positive as perhaps we and others would have hoped for, in terms of demonstrating that the screen was as accurate as we hoped. But the reason I sort of chose it is partly, um, that lots of things came out of this study, you know, in terms of both the science, but probably in terms of thinking about, um, what I've gone on to do since, and the things that I've learned both from doing this study and from the work I've done subsequently. Sue:
Well, I think it's great Tony that... I think, for kind of early career researchers especially, to hear, um, that you might consider a study that was, in inverted commas, a "failure", you know, as you say, not exactly the result that perhaps you were hoping for, um, to talk about today. I think that's really encouraging because, um, you know, I, I agree with you, sometimes these, these results that don't pan out the way we want turn out, nonetheless, to have a really massive influence on how we do research, and so on. So, so that's fantastic. Um, so, so tell us a bit more about the background, um, uh, in terms of, you know, how early identification of, of autism was done at the time of the study, you know, about 20 years ago. And, and, and um... And what, what motivated you to kind of investigate this at the time? Tony:
Well... Well in fact that's one of the stories that goes along with it: the paper was published in 2000, but we'd been planning the study, I remember, it was initiated by my mentor Simon Baron-Cohen, who I'd worked with previously, and that was back in 1991. So that's when the grant was written, and we started the study at the beginning of, uh, 1992. Um, so, well, um, you know, one of the things sort of back then, nearly 30 years ago, is, it was the case that, you know, um, autism was always, and still is, diagnosed at, uh, different ages in different individuals for all sorts of reasons, but early identification, as young as the age of two, was really very rare, both in the UK and internationally. Um, but some of the work that Simon had been doing, and some work that, um, I had gotten involved in, and work that other groups did, certainly seemed to show that in young children with autism, so preschool kids, three or four year olds with autism, some of the things that they had most problems with, um, were things, um, about early social communication skills. So these are nonverbal skills; a classic one would be joint attention abilities, so the ability to jointly reference, um, an object in the world with, you know, um, an adult. So it would be something like, you know, a cat jumping onto a window sill, and a baby would look around at 18 months of age, look at the cat, maybe mouth something like a proto-word like "cat", perhaps point, or just smile and look back at their mom or their dad. And that's a sort of joint attention episode. And what we knew from typical development was that actually these things are in place in most typically developing children by about the age of 18 months or so. So the idea there was, and this was, you know, Simon's, as he is, you know, he's great at big ideas. This was a big, novel idea. 
He thought, well, why don't we actually implement this? And if most children in the community, most typically developing infants, are going to be showing early joint attention, indeed early pretend play skills, by the age of 18 months, maybe the children who are not showing those skills, or are behind in those sorts of early social communication abilities, might actually be children with autism. So he did some work, that I became involved in, looking at how these seemed to work out in young children with autism. And it did seem to be the case that they were doing worse on the early... the early versions of the screen that we had. And then in 1992, we started screening a large community. So we aimed to screen up to 40,000 children. We worked with, um, colleagues then at Guy's Hospital, Gillian Baird, she's a pediatrician, and Tony Cox, he's a child psychiatrist, and aimed to screen 40,000 children in the community in South Thames. This was a large MRC, Medical Research Council, funded grant that I became involved in after I qualified as a clinician; I'd worked with Simon in my, um, in my masters and research studies before then, um. So I became involved in the study, and we did, you know, manage to screen 16,000 of the, um, of the 40,000 children, and did report in an earlier paper that we published fairly early on in the study, in 1996, that indeed it was the case that some children, that we as a team gave a diagnosis of autism to around the age of two years of age, so very young, had been picked up by the screen. So the story then was: this screen that looks at early joint attention and, um, pretend play skills can identify children with autism. And almost in some ways it was only then that our minds turned to what we should do next. 
So what we did next was, and this is sort of part of the reason I've chosen this, because one of the interesting things about all sorts of research studies, certainly this sort of research study, that's aiming very high because it's aiming to say, could something have clinical utility in the general population, so in the public health sort of system, is that we realized that we needed to work out which children in the population didn't have autism. So our initial focus had been on looking at the children we identified and following them up, including children who were picked up by the screen who didn't have autism at the age of two, some of whom we saw again at three or four, who we then did think had autism because, um, the manifestations of the signs and symptoms had become clearer over time. So our question was sort of emerging. And then the big question that began to loom, and there's another big question coming, which I won't mention yet, but that's the really interesting thing about this sort of line of research, was: well, if we really want to know how well the screen does, we need to find all the kids with autism in the population. And that's just a question we hadn't previously realized that we'd have to ask ourselves, or that we'd have to go out and investigate, you know, in this sort of population. Um, so, um, I then began to work with Gillian Baird, who's the first author on this sort of paper. Um, and, um, she was the lead, uh, pediatrician for all the community services in South Thames where the study was run. So she knew all the pediatricians and the child development teams that were diagnosing young children with autism. And we went essentially through all of their records to find all the cases from the population who had a diagnosis at the age of seven. And that enabled us, enabled me with Gillian, to write this paper. I did the analysis, I drafted the manuscript. 
So it was really Gillian and myself, you know, who, um, who ran this sort of follow-up study, and lo and behold, the story was not as positive as we were hoping that it would be. Um, so, um, um, that's... That's what happened over, over the eight years that led up to this publication: it started out with a very ambitious overall aim and found something initially very positive. And then, as I've already said, and you've already reflected, we found a rather more salutary message when we did this sort of follow-up study. And I guess that the first reason I chose this paper is that it taught me that however tough the answers, good methods are really, really important in research, particularly when you're addressing clinical questions. And that, you know, the more you learn about good methods early on, the better research you'll do. Sue:
So, um, so Tony, in terms of the analysis, I mean, it's, it's kind of amazing that you managed to do such a huge comprehensive look at the... the kind of case outcomes from that population, right? And as you say, that didn't work out as well as you wanted in terms of sensitivity, but your specificity was pretty high. And I just wonder, for the people listening who are less familiar with the distinction between those two, could you just give us a quick, um, tutorial on the difference between sensitivity and specificity? Tony:
Yeah, sure. So sensitivity is the proportion of individuals with the condition that you're looking for who a test or a screening instrument, in this case this one-page behavioral screen that focuses on early joint attention and pretend play skills, identifies as sort of positive. So those are children who were scoring above the threshold we set, children who were doing not well on joint attention skills and not well on pretend play skills. Um, and as I said, around 40%, well 38%, of the children who went on at age seven to have autism had screened positive. Specificity is, is sort of like the reverse in a way. So that's really the children who are negative on the screen, who aren't identified as being, you know, at risk of scoring positive, you know, um, who don't in fact have the condition. So in a... in a study like ours, where we're looking for autism in the general population, and this, this comes to one of the other stories that came out of this study, we know now, what we didn't at the time, and this study became part of the pathway to us actually investigating how common autism was in that population, that when you're looking for a rare condition, let's say, using the current figure, the prevalence of autism in childhood is something like 1%, and, you know, at the time it was considered much rarer than that, specificity is always going to be very high, because most children in a population are not going to have autism, and any reasonable screening instrument is not going to identify most of them as potentially at risk or screen positive. 
So one of the things about these sort of technical properties of screening instruments, or tests of any kind, sensitivity and specificity, is that they're sort of population specific, and very much how you read what's a good sensitivity, and how you read what's a good specificity, depends on the population that you're studying, and how many cases of, of the condition you're looking for, you know, actually reside in that population. So there's one other sort of technical property of screens, or there are two others, but the other important one, that people in the screening field write about, is a property called the "positive predictive value", sometimes shortened to PPV. The positive predictive value asks: "of the children who screened positive, how many of those actually have the condition you were looking for?" So the PPV of the CHAT, that's the Checklist for Autism in Toddlers, which I haven't mentioned yet, the one-page screening instrument that was used in the community, was not bad in this sort of study; it's got a moderate value. There are two different values, which gets sort of confusing: one of them is quite high, one of them is moderate to sort of low. So overall, um, different ways of thinking about the accuracy of the screening instrument tell different stories. But in some ways, you know, um, those are the technical things, um, that, um, are always useful when you're thinking about how screening would work, you know, um, in a particular population for a particular reason, whether that's in a research study or in, you know, in actual clinical practice. What we, um, what we, um, sorry, I'm just distracted Sue because my printer is going manic, which is not me. Someone else is printing, my partner, and I've texted her to say, "Please, can you stop?" 
And I guess she's not got that yet, so I'm having to talk, um, whilst the print queue is just going berserk, um, I'm sorry. So those are the technical sort of, um, parameters, as we call them, of screening instruments, but in some senses how to understand this research is probably best, um, um, contextualized by the discussions that we had around this time and afterwards with, I think, the National Screening Committee. So this is the NHS committee, um, a whole series of sort of, you know, uh, layers of NHS, um, sort of bureaucracy, but also sort of clinicians, some of whom we knew, who make decisions about what health surveillance should go on in community settings. And the health surveillance committee, as it was known at the time, got wind that we were screening for autism and had an MRC funded study. So they wanted to know: how was this screen doing? And one of the things I remember is going to a meeting with Gillian in London, I work in London, but going to some meeting in London with this committee, with some pediatricians and, um, um, health surveillance experts that she knew and that I knew somewhat, because I had just moved to the Institute of Child Health, and that's where a few of them sort of worked. And we showed them this data before it was published, and you could see their faces falling around the sort of meeting, because they'd heard about this very promising instrument that we'd previously sort of published on. But of course, when you think about it in public health surveillance terms, it just doesn't come up to the mark to be taken seriously as a, as a, as a population surveillance screening tool. So when you set that very high threshold of "could this be used in the NHS services in the UK?", the answer is a very clear "No". 
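The screening parameters discussed above can be made concrete with a small worked example. This is an illustrative sketch only: the 2×2 cell counts below are hypothetical, chosen so that sensitivity lands near the roughly 38% mentioned in the conversation; they are not the paper's actual figures.

```python
# Illustrative sketch: computing sensitivity, specificity and PPV from a
# 2x2 table of screen result vs. diagnostic outcome. All counts below are
# hypothetical, not taken from the Baird et al. (2000) paper.

def screening_metrics(true_pos, false_neg, false_pos, true_neg):
    """Return (sensitivity, specificity, PPV) for a 2x2 screening table."""
    sensitivity = true_pos / (true_pos + false_neg)  # cases the screen caught
    specificity = true_neg / (true_neg + false_pos)  # non-cases correctly passed
    ppv = true_pos / (true_pos + false_pos)          # screen-positives who are cases
    return sensitivity, specificity, ppv

# Hypothetical counts for a population of 16,000 with 50 true cases:
sens, spec, ppv = screening_metrics(true_pos=19, false_neg=31,
                                    false_pos=20, true_neg=15930)
print(f"sensitivity={sens:.0%} specificity={spec:.2%} PPV={ppv:.0%}")
```

Note how the rarity of the condition drives the result: with only 50 cases among 16,000 children, even a screen that misses most cases still posts a specificity above 99%, which is why specificity alone says little about a screen for a rare condition.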
Which is interesting, because that meant that we ended up taking a relatively conservative, or I'd now say salutary, sort of approach to talking about how well this screen did, but that's in the, in the context of working in the UK, where we were testing it in publicly funded health services, through a Medical Research Council grant. Our colleagues across the pond in North America then ran a lot of similar studies with modified versions of our original screen, and there the story for 20 years has been much, much more positive. Which is a different sort of lesson, I think, that I've learned from being involved in this sort of study: our North American colleagues can't work out why we're so negative about what we found, but that's because they're operating in a different place. Sue:
And of course, debates about screening, population level screening, for autism, you know, continue to be relatively high profile, I would say, you know. So it's still a conversation all these years later: should we be, as a matter of course, um, investigating young children to see if they might be autistic or not? And if so, how on earth would you go about doing it? So, you know, it's, it's striking, isn't it, in some ways, um, that, that, that nut has not yet been cracked. And so I wonder what you think, I wonder what you think are the lessons to learn from this, you know, should we just be focusing on things other than early screening? Are there other ways to have the best possible impact? Um, the other thing I'm curious about, sorry, this is now a double question, which is very bad manners of me. Um, the other thing I'm really curious in your opinion about is that often one of the reasons given for early screening is that it then permits early intervention. And there's a big question mark over whether we've really got the resources to do that well either. So do you want to, do you want to open up any of that can of worms, Tony? Tony:
They're all great questions. And they're ones I've thought about since, and um, you know, my views have changed over time, and even, probably, in the last few years my views have changed. We have run other studies that I won't talk about where we have done screening, um, using the, the American modified version of our instrument, another early screening instrument, um, actually in community services for children who are referred, rather than the general population. Um, and I also have been involved, again with Simon, in another, um, gosh, you know, um, study that we've been doing for a long, long time now, another modification of this screen called the Quantitative Checklist for Autism in Toddlers, the Q-CHAT, but anyway, putting those other different sort of studies aside, the question is still an open one. And still, as you've indicated, a contentious one. I've already said, I think there is a difference between attitudes; I've characterized that as North American versus the sort of UK, and to some extent sort of Europe, and, um... In some ways it depends a little bit on what I was sort of saying before, which is whether or not you're making recommendations for sort of, um, universal screening as a sort of health surveillance policy or practice, because that has implications. Um, certainly, you know, there's a sort of nice sort of phrase that colleagues, again, I'm thinking of, um, American sort of colleagues, mostly in North America, uh, use, that I quite like. It doesn't necessarily lead very clearly on from what we found, but I think "screen early and screen often" is quite a nice sort of idea. The idea there is that no single application of a test is going to pick up all the cases of a condition like autism. The manifestations of autism are going to be different in different individuals at different ages. 
Some individuals have more severe presentations than others, and others have milder presentations because they have higher levels of general ability or higher levels of language and communication skills. Even the way that the diagnostic criteria for autism were rewritten, which I'm really very supportive of, you know, back in 2013 in DSM-5, or the current beta version of ICD-11. So the diagnostic manuals have that very nice additional sort of caveat that, um, symptoms may not be manifest and recognizable until demands exceed capacity. You know, we know some individuals with autism may not be recognized, and that's not a good thing necessarily, until they're adults, you know. So, so, so a single application of a test at one time point in every individual is not going to identify all the cases, but, you know, individuals with autism, and in fact a range of neurodevelopmental sort of conditions, can be identified at different points in development. So that sort of positive "screen early, and sort of screen often" is saying: "well, when you do look at social communication skills, when you do, as some of the newest screens do, you know, look at emerging rigid or repetitive behaviors, when you do look at early delays in language development, you will identify some children who have a need to be identified because their development is not going right". Sometimes those children will have autism, sometimes they'll have general developmental delay, sometimes they'll have language and other communication delays, and sometimes they may have early emerging conditions like attention deficit hyperactivity disorder. So one of the sort of interesting sort of, you know, um, conversations in screening has always been "how much do you screen for autism specifically" as opposed to screening for children with neurodevelopmental conditions much more broadly. 
I happen to fall into the screen-for-all-neurodevelopmental-conditions camp, partly because that's what the screening instruments, including our screen, the CHAT, picked up. So I've not said yet that some of the children who screened positive, who didn't have autism, actually had language or other developmental delays. Picking them up is a good thing, because they're children who could benefit from early intervention, early support services. If I think back to the interactions with ethics committees back in 1991, when we started this study, people were saying: "if we can't really diagnose, um, children with autism as young as the age of two, why would you identify children who might have autism? If you have no services to offer for those children, why would you be giving the diagnosis? You'll be troubling parents, you know, causing harm in some sort of way". Of course, the fact is, and my very strong view is, that the children would be developing in that way whether we were doing anything or not, whether we were running the study or not. And parents of young children who have emerging autism as early as that age (and many of the children picked up by the CHAT screening instrument at 18 months of age had quite severe presentations of autism; they were really quite impaired: many were nonverbal even at three or four when we saw them, and many had intellectual disability), those parents, often quite early on in the child's third year of life, would know that something wasn't right. Often at the point of screening they didn't know that, but I know from, from, from clinical practice over the years since then, you know, just because parents don't know what's wrong, they know that things are not right with their child. Their child is very difficult to manage, and that child is not learning. So the arguments about "well, if we're not sure about diagnosis... 
Well, if there aren't services...", in a sense, I mean, I strongly feel that that's almost like discrimination against children with disability. Whether or not we have worked out how, how to research and how to study, how to characterize the problems they have, whether or not the NHS clinical services have worked out what they could be or should be doing, and how to support parents and provide direct support for children..., letting those children develop, and then, in a sense, almost face some secondary difficulties, because the consequences of those unraveling developmental differences make their future development even more difficult, isn't really providing an appropriate, or probably an ethical, service. I've always felt, you can tell, quite strongly about this on moralistic sort of grounds, you know, but actually, um, um, it's a little bit like sort of, you know, sort of close your eyes, cover your ears. You know, if children are going to go on to be on a trajectory of developmental disability, that means that they will benefit from support. And if we as scientists and clinicians don't know how to do a better job of it, the answer is not to stop the search going ahead; the answer is to be better at research studies. Sue:
So the last question I wanted to ask you, Tony, is sort of slightly zoomed out, thinking about, you know, this kind of 30 year period that you've been working in this area that you've talked about, what do you think are some of the big changes that have happened in developmental psychology over that time? You know, I'm sure there are many of them, but, um, is there anything you'd like to pick out that you think is particularly interesting or, or maybe optimistic, or yeah. Tony:
Well, I d-, I mean, there are so many things Sue, and it's, you know, these are, these are always conversations that are fun to have with someone like you, but also with people in the audience, you know, um, um... So I'm gonna pick on some that I was thinking about at the weekend, when I knew that I'd picked out this paper and, you know, you said you were gonna be interviewing me. And I told you at the weekend what the paper was going to be. And I decided on it, you know, a while ago. And um, partly, let me think, I'm gonna be self-referential here and think back about why this paper is important and partly how that's influenced what I've done in the 20 years since, and then I'm going to broaden out and, you know, think about the sort of field, you know, and some of the things that are both positive and sort of negative about changes. I mean, for me, I mentioned before that it taught me something about rigorous methods and sort of also just research designs. And, you know, I would indicate that there was another story that came out of this, which was: when we did the follow-up at age seven of all the children who had been screened, we had to identify all the cases in these 12 districts in South Thames of children at age seven who had autism. And by doing that, without realizing until we got a bit closer to that part of the study, um, it became clear that we were in essence doing a prevalence study, counting how many cases of autism had a diagnosis by the age of seven, from a population of 16,000. Lo and behold, that actually ended up being, um, 60 per 10,000; more commonly that would be put as 0.6%. Now that was way higher than any published prevalence estimate of autism at the time. Then again, thinking about methods, what we realized, Gillian and myself, is that we didn't really know how to do prevalence studies. 
So we went out and found people who did. So we hooked up with Andrew Pickles and Emily Simonoff, who I've worked with, um, uh, uh, happily for the past 20 years, and who I work with today, because, you know, we had to do a proper epidemiological study, which actually turned out to be the first study that said the prevalence is as high as one in a hundred children. So our, our population prevalence from the next study, which was related to this but a different cohort, some were the same children but most weren't, um, was just over one in a hundred, so just over 1%. Um, but I think then, sort of for me, and partly through this sort of study, it became clear to me that there was huge value in longitudinal, you know, um, studies, particularly longitudinal studies of key cohorts. So this study, this population, became a key cohort, one, because we'd screened them in the first place; no one else had at the time. I mean, actually, without us knowing, around the same time, about a year or 18 months behind us (though they weren't so far behind us in terms of publication), Dutch colleagues were actually screening 30,000 children. So twice the population size that we had, not at 18 months of age, but at 14 months of age. So, um, fortunately we didn't know that at the time, but that was someone who, you know, who I now collaborate with, Jan Buitelaar's group, and I got to know, um, him and his group, and I still know some of them, you know, um; today I still collaborate with Jan and some of his colleagues in the Netherlands. And so, lo and behold, you know, perhaps unsurprisingly, other people had the same idea. But this became for me a really important sort of cohort, because we could study it both at a population level and in terms of the children we identified, children seen at a very young age, and indeed we did see these children at three or four years of age. We did see some of them at seven. 
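As a quick check on the prevalence arithmetic above: a rate of 60 per 10,000 in a screened population of 16,000 implies about 96 cases. That count of 96 is inferred from the quoted rate for illustration, not taken from the paper.

```python
# Illustrative prevalence arithmetic. The case count is hypothetical:
# 96 is simply what a rate of 60 per 10,000 implies for 16,000 children.
population = 16_000
cases = 96

per_10_000 = cases * 10_000 / population  # 60.0 per 10,000
percent = cases * 100 / population        # 0.6%
print(per_10_000, percent)
```

The later epidemiological study he mentions revised this upward to just over 1%, i.e. just over 100 per 10,000, which shows how much the estimate depends on finding every case rather than only the screen-detected ones.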
And then we have subsequently gone on to see some of the population, you know, some of the wider population in our screening study, including some children who were involved in the CHAT study, at age 23, in a follow-up of our population that Emily Simonoff and myself, um, completed a few years ago and have just begun to sort of publish on. And what I've gone on to do since is establish with colleagues a number of key longitudinal cohorts, both from our population studies with, um, colleagues Mark Johnson and Emily Jones, you know, she's from Birkbeck, Mark is now in Cambridge, in the family elevated likelihood studies that we run of, um, uh, uh, infants who've got brothers and sisters with autism, and indeed now brothers and sisters with ADHD or genetic conditions, in the BASIS STAARS collaborations. And then a whole bunch of longitudinal intervention sort of cohorts. So, broadening out, one, one of the things I think that has changed a lot is there was fairly little longitudinal research going on 30 years ago, or even 20 years ago. And when there was, it was really quite small scale. Small scale studies are fine, but studies that are larger in terms of numbers, more ambitious in terms of design, and more rigorous in terms of method allow you to answer questions in a special way. One of the key things is who's your sample. Certainly as a researcher and as a, you know, as a career academic, there are clear advantages from understanding methods and thinking about who valuable cohorts might be. They may be valuable because of what you've done with them, because of where they come from, because of when or where you found them, and, in those terms, um, cohorts that help you to ask, um, key clinical longitudinal questions become very valuable. 
They sort of become valuable too because they give you quite fundable approaches, you know, where you've solved part of the problem, so when you're going to solve the next part of the problem you've already done a lot of the groundwork. One example of that would be, you know, our most recent grant with the Medical Research Council again, which has been to follow up some of the, um, children we initially saw as infants, who, um, were brothers and sisters of a child already with a diagnosis. The focus there was not actually on autism outcomes but on emerging ADHD and anxiety in mid-childhood. But because we'd been studying those children for five years already, um, it meant that relatively modest additional funds, this being public money and charity money, could allow us to answer a question in a unique way. So I think, um... that's, for me, the value of methods, both the longitudinal nature and the value of special cohorts. I think there are some downsides. Not everything can be big scale, and there's the question of how you find your place as an early career, as a junior sort of researcher, and how you carve out a territory to ask your own questions, to establish yourself. I was super lucky, and everything was just different then, you know. When Simon and Gillian and Tony Cox started the study, I wasn't involved in writing the MRC grant, but I did work on the grant. I had a faculty job and I didn't have a PhD, because that's how things happened back 30 years ago. It was, you know, it was a beautiful time! Um, and, you know, I had a sort of position, but it meant that I could almost launch myself and become part of this. And the reason I picked this paper is that, although Simon started the study, this was the study that Gilly and I completed.
This was her going around to all the clinicians and working out which kids in all the services had autism, and it was me crunching data at the weekend in the rudimentary way I used to, in SPSS, horrible program, you know, um, in a very, very simple sort of way, but also writing papers, which I'd learned quite quickly how to do. But from there on, after this, in a sense, I was one of the people who was, you know, writing the grants and leading studies. So in some ways this was not, for me, a coming-of-age paper, but it does indicate that from when we started the study in 1991 to when we published this in the year 2000, I'd learned a lot. And boy, I wish I'd learned those lessons earlier on, and the things I've learned in the 20 years since, I wish I had learned 30 years ago. [Laugh] But, you know, for me it's been a complete privilege and pleasure and a whole lot of fun, as a person and as a scientist. But I think there are challenges, and it isn't that everything's got to be big scale and ambitious, but I do think that methods have got to be rigorous, particularly where you're addressing clinical issues. And if you work in the autism field, as I do, both as a clinician and as a scientist, then, you know, part of the privilege of working with, um, special families, as I would say, is making sure that you do your best job, and sometimes doing the best job means the methods. And the findings will tell you when you didn't find what you expected. So that goes back to where I started. Thinking about this paper, in a sense, this was a disappointment, but it's not the last one that I've had to endure, you know, um, and, um, you know, um, I'm sure it won't be.
One of the things I say in talks all the time, so I'll say it in this conversation with you now: one of the jeopardies of longitudinal studies, and I know you yourself are involved in some, is that your mistakes live with you for a long time.Sue:
Yeah, I do have some, uh, sympathy with that position. It's extremely agonizing. And when you're designing a new one, you feel the enormous pressure of wanting to try to get it right. And the inevitability of knowing that you will get at least some bits of it wrong. And as you say, we'll be stuck with that for years to come. Um, well, that was an amazing conversation, Tony, thank you so much for your time. And anyone listening, you'll be able to find out more about Tony's work by following the links on the podcast page at Buzzsprout and, um, Tony, thank you so much. And, um, I know that everyone's gonna love hearing everything you've had to say today.Tony:
It's been my pleasure, lovely to see you Sue! Take care!Sue:
Take care, Tony. Bye!Tony:
Okay, we did it! I thought that went quite smoothly![Podcast jingle]