0:08
So, the CONSORT guidelines start with the title of the report, and the
title of the report should be succinct. This is not part of
CONSORT, but it's a good idea to make it succinct because you are
usually limited on the number of words you can have in the title,
and if the title's too long, people start to ignore part of
the title. And the CONSORT statement says that
key design terms should be included in the title,
such as trial and randomized, and the title
should also include the treatments that are being evaluated
and the disease or population that's being studied.
And this is helpful so that consumers
can scan titles when searching for relevant literature.
And here I've included an example of a title that I think is a well-done title.
This is the title
of the primary results paper for the MUST trial that
you've heard Janet and me talk about earlier in the course.
And the title was, Randomized Comparison
of Systemic Anti-Inflammatory Therapy versus Fluocinolone
Acetonide Implant for Intermediate, Posterior, and
Panuveitis: The Multicenter Uveitis Steroid Treatment Trial.
And you'll notice that it does have key design terms, both randomized and trial.
It tells the treatments that we're evaluating,
systemic therapy versus the implant, and it also
tells the population that we're studying,
people with intermediate, posterior, or panuveitis.
1:28
And as I mentioned, there is a separate CONSORT statement about writing abstracts,
but there's also a bit about writing abstracts in the main CONSORT statement.
And abstracts are extremely important because they are
the key to the future of the paper.
For the National Library of Medicine,
the abstract is the primary source of indexing terms.
And it's also what consumers usually read first,
and sometimes it's the only thing that consumers read.
1:55
So abstracts, according to CONSORT, should be structured, no free-form abstracts.
And they need to have a design
statement, a methods statement, results, and conclusions.
The specific structure will be dictated to some extent by the journal
that you submit to, but most of them follow this basic structure.
And again, with abstracts, like titles, it is important to be
succinct, because you'll usually only get 200 to 300 words for the abstract,
and those words have to be used to summarize the entire study.
And now we're going to move on to the body of the paper.
2:33
CONSORT has guidelines on how to write the introduction.
The introduction should talk about the background and
rationale, and describe why the trial was being done.
And it also needs to establish equipoise.
You'll remember from our ethics lecture that we
talked about the Declaration of Helsinki, which says
that ethical research should not put people at
unnecessary risk, and if we cannot establish equipoise,
then we are putting people at unnecessary risk, because one of
the treatments has obviously already been shown to be better than the other.
And in the 2010 CONSORT guidelines, they
added that, ideally, the introduction should include a
systematic review of the literature that was published
up to the point where the trial began.
And in the introduction section, we also need to state
what our objectives are for the trial and our hypotheses.
3:24
Now we'll move on to the methods section of the paper, and this is the
dry part that's usually written in small
print, and some people skip this part completely.
But hopefully, after you've taken this class,
you'll read this part much more carefully, because
the focus of our class has really been on the methods of conducting clinical trials.
The CONSORT statement doesn't have a guideline
about a statement in the methods on IRB
review and approvals at the different clinics, but
most journals do require that you have a
statement in there on where the protocol was
reviewed and that it was approved by local IRBs.
You also need to have a description
of the trial design, including the allocation ratio.
The eligibility criteria should be explicitly defined,
and you need to mention if there were any issues in executing the criteria.
4:15
You need to describe the intervention in enough detail so that it can
be replicated, and how much detail
is needed will depend on the intervention itself.
If it's a standard intervention, you may need only a little bit of detail.
But if it's a new surgery, for example, you
might need quite a lot of detail in this section.
You need to describe the hierarchy of outcomes:
what are the primary outcomes and the secondary outcomes?
And you need to describe how each of
these outcomes was assessed and how they were defined.
You need to have information on how the sample size was
determined and all the assumptions that went into determining that sample size,
so that a reader can look over your
assumptions and see if they think they look reasonable.
And that includes a description of what detectable difference you used in your
sample size calculation, so that the reader can see whether that was a clinically
important difference or whether it's too large to be important.
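To make that concrete, here is a minimal Python sketch of a standard two-proportion sample size calculation; the proportions, significance level, and power below are illustrative assumptions, not values from any trial discussed in this course.

```python
# Minimal sketch: per-group sample size for comparing two proportions using
# the normal approximation. All numbers here are assumed for illustration.
import math
from scipy.stats import norm

p1, p2 = 0.60, 0.45          # assumed outcome proportions; |p1 - p2| is the detectable difference
alpha, power = 0.05, 0.80    # assumed two-sided significance level and power

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~0.84

n_per_group = ((z_alpha + z_beta) ** 2 *
               (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)
print(f"Approximately {math.ceil(n_per_group)} participants per group")
```

Reporting the assumed proportions, detectable difference, alpha, and power is what lets the reader rerun this kind of calculation and judge whether the assumptions were reasonable.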
5:18
And you need to talk about any important changes that happened during the trial.
It is not at all uncommon for changes to happen during a trial, because
of information that is gained during the
trial or information from outside of the trial;
we talked about this some in our data monitoring lecture.
And you just need to talk about what those
changes were and how you think they affected the trial.
5:56
So what we
need to know here is whether it was possible to protect future randomizations.
So here, we're trying to establish that there was no selection bias.
And this doesn't have to be a detailed description.
It can be as simple as saying that, after confirming eligibility,
the clinic staff obtained the treatment assignment centrally using a data system,
as long as we're clear that the clinic staff did not have access
to the treatment assignment list
before the randomization was received.
The writer also needs to discuss masking: whether or not there was
masking, and if so, who was masked and how masking was achieved.
Did you use over-encapsulation or matching placebos?
Or were there sham surgeries?
6:37
The writer has to describe the statistical methods,
the methods for the primary and secondary outcomes,
so that the reader can decide whether or not the
analysis was done according to the design of the study.
They also need to discuss any subgroup analyses that were done,
and how they were done, so that the
reader can see if the analyses were performed appropriately,
and whether or not they were specified ahead of time or post hoc.
We talked about subgroup analyses, and it's fine to do
both analyses that are preplanned and analyses that are post hoc;
we just need to be clear on which are which.
And also, if you have any adjusted analyses, you need
to state why those specific covariates were chosen for the adjusted analyses.
7:36
So one of the important
contributions of the CONSORT guidelines has been to promote the use of flow
charts to describe how patients entered, exited, and were treated during the trial.
And here I have an example of a CONSORT flow chart
from the CONSORT website, and you can see at the
top that CONSORT recommends that you describe the people who
were assessed for eligibility and why those people were not eligible.
And then, of the people who were randomized,
how many were allocated to each treatment group.
And then, within those groups, how
many actually received the allocated intervention?
Because you almost always have at least one person who, for some reason, was
allocated to one treatment group or the
other and did not receive their allocated treatment.
And then you describe how people went through the follow-up process:
who was lost to follow-up,
and who discontinued the intervention for various reasons.
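As a rough illustration of where those numbers come from, here is a minimal Python sketch that tallies the counts feeding a CONSORT flow diagram from a participant-level list; the field names and records are hypothetical, not from any real trial.

```python
# Minimal sketch: tallying counts for a CONSORT flow diagram from a
# participant-level list. Field names and records are hypothetical.
from collections import Counter

participants = [
    {"status": "randomized", "arm": "A", "received_tx": True,  "lost": False},
    {"status": "randomized", "arm": "B", "received_tx": False, "lost": True},
    {"status": "excluded",   "arm": None, "received_tx": False, "lost": False},
]

assessed = len(participants)
excluded = sum(p["status"] == "excluded" for p in participants)
randomized = [p for p in participants if p["status"] == "randomized"]
allocated = Counter(p["arm"] for p in randomized)
received = Counter(p["arm"] for p in randomized if p["received_tx"])
lost = Counter(p["arm"] for p in randomized if p["lost"])

print(f"Assessed for eligibility: {assessed}, excluded: {excluded}")
for arm in sorted(allocated):
    print(f"Arm {arm}: allocated {allocated[arm]}, received {received.get(arm, 0)}, "
          f"lost to follow-up {lost.get(arm, 0)}")
```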
8:34
And here I have an example CONSORT diagram for a
trial that was done in the center for the treatment of asthma.
At the top we have data on
1,309 potential participants who were assessed for eligibility.
This part of the CONSORT flow chart is not always included in the paper,
because you can't always know exactly how
many people were assessed for eligibility, and
it can be problematic to collect data
on these people because they haven't consented.
And then you have different definitions of screening at all of the different clinics.
So this can be difficult to collect data on, but in this case we provided a
definition for screening, which, for this trial, included
only people who were actually approached about this specific study and were
evaluated for eligibility. Of the 1,309 who were evaluated, 522
were excluded because they did not meet one or more of the eligibility criteria.
So in this trial, we had an enrollment phase before the randomization.
And this was done because one of the eligibility criteria was that we
were only including patients who were stable on corticosteroids for a month.
So after they were enrolled, if they
weren't already stable on corticosteroids, they had
to wait for a month to become
stable before they were eligible for randomization.
And at the bottom we have 500 participants who were randomized, and here on the next
slide we see how they were allocated: 169 were allocated to fluticasone,
165 were allocated to the combination therapy,
fluticasone and salmeterol, and 166 were allocated to montelukast.
And underneath we have the number who actually received their allocated
intervention, and you'll notice almost everyone received it, but there were
three people who did not receive their allocated intervention.
Underneath we describe follow-up.
During follow-up there were 13 participants, 16 participants, and
20 participants in the different groups who discontinued the intervention, and we
have the reasons for discontinuing underneath. But this was an intention-to-treat
analysis, so you'll recall that even people who never
took their intervention or who discontinued
early were included in the analysis.
And at the bottom of the slide, you'll see
the number of people who were included in the analysis.
There were six people who had no follow-up data, and they
were not included in this analysis.
But everyone else, regardless of whether or not they
took their assigned treatment, was included in the analysis.
11:12
And on this slide, I have another example.
It's very similar, but I just wanted to point out that
sometimes things happen in a trial that you absolutely cannot control,
but you can use the CONSORT diagram to describe them.
So in this case, we had a site that
was in Louisiana, and during Hurricane Katrina, the site was devastated
and we lost all of the data, and we stopped collecting
data at that site because the staff and the participants just
had other things that they needed to worry about at the time.
So we described that in the flow chart: we described how the randomization was
originally 412, but we lost ten participants due to Hurricane Katrina,
five in each group. So the group that we actually continued
with was without those ten participants. And also, in the text of the results
section, we need to discuss when the trial was actually conducted, the dates,
because this helps us to establish context.
12:13
We also have baseline data that describe
the demographic and clinical characteristics of
the population, and that gives us an idea of the generalizability and also
allows us to compare groups and see whether there are any imbalances.
In all results tables, we need to know the number that were analyzed.
This sometimes differs slightly from table to table,
but we need to know this number so we can see whether the analysis
was done by the original treatment assignment
and so we can see who was excluded.
For estimates of the outcomes, we need to know the treatment effect estimates,
so not just the estimates in each treatment group separately,
but an estimate of the difference or the relative effect.
And we need an estimate of uncertainty,
so standard errors, standard deviations, or confidence intervals.
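For illustration, here is a minimal Python sketch of one common treatment effect estimate, a risk difference with a Wald 95% confidence interval; the event counts and group sizes below are made up.

```python
# Minimal sketch: a treatment effect estimate (risk difference) with a
# 95% Wald confidence interval. Event counts and group sizes are made up.
import math

events_a, n_a = 30, 150   # assumed events / sample size, group A
events_b, n_b = 45, 150   # assumed events / sample size, group B

p_a, p_b = events_a / n_a, events_b / n_b
risk_diff = p_a - p_b
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
lo, hi = risk_diff - 1.96 * se, risk_diff + 1.96 * se

print(f"Risk difference {risk_diff:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```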
13:01
If there were ancillary analyses, we need to know the results.
For the subgroups, if there are unadjusted and adjusted
analyses, we need both of those.
And again, we need to be clear about which
ones were pre-specified and which ones were exploratory.
And the results section, at least for the primary paper,
should have details on the harms, the adverse events:
how these were assessed, and the rates in each group.
13:56
Sometimes we see data on adverse
events grouped together,
not specified by the severity or the type of adverse event, and sometimes
we see adverse events reported only if they reach a certain frequency.
And I have to admit I do this sometimes, because when you
gather data on adverse events, we frequently use both standard reporting, so
we have a list of events that we report, and then
we ask participants to tell us if any events that they are
experiencing aren't on the list.
So you end up with a lot of text that you have to parse through,
and you have one event reported by one person.
And if you listed all of those, you wouldn't be
able to fit it into the length requirements for the paper.
So sometimes you do have to put some sort
of frequency cutoff on single adverse events that are reported.
But then there are occasions where you see adverse events in the paper
only if they reach a certain threshold, when all of them should be reported, and
sometimes you see adverse events reported only
as counts, instead of with the timing of the event.
So it's possible that an adverse event is
occurring in both groups, but in one group it's
occurring earlier than in the other group, so we
need to know the relative timing of the events.
And there are more items on this list,
and if you're interested, you can read the paper
and learn more about the problems with reporting harms.
15:35
So tables and figures need to convey the
essence of the results without the reader having to read
all of the results text.
And legends are necessary to explain what is happening in
the figures and the tables, but they have to be succinct.
And each of the tables and figures needs to provide both numerator and denominator
data, so you shouldn't provide just percentages,
because then we don't know what the denominator is.
If you're going to provide a percentage,
we need both the numerator and the denominator.
And even
when we're talking about time-to-event data, it's good to
know how many people are in the risk set.
And for easy reading, we usually make the treatment comparisons
in the columns, and we decimal-align all the tables.
16:21
Table one is typically the baseline characteristics by treatment group.
At various points in time, it's been the
fashion to either include or not include p-values
in Table one.
Currently, the fashion is not to report them,
because, by definition, if there is a baseline imbalance, it's by chance,
because we randomized the treatment.
The p-values do provide you with some
measure of potential confounding, but regardless of whether or
not the p-value is small for a specific
imbalance, if the variable is an important predictor of the outcome,
it might still be a strong confounder.
So we need to carefully look over the table, not just
the p-values but the actual values in the table, and pick out
any potential imbalances that could be important and
might need to be adjusted for in the analysis.
17:12
In Table one, we usually express variability using standard deviations.
And again, for easier reading, it's good to decimal-align the digits
and report things at an appropriate level of precision.
If you're measuring things on a whole-number scale, like one, two, three, then
you don't need to go out three decimal places when reporting the means.
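As a small illustration of reporting at an appropriate precision, here is a Python sketch that formats a mean and standard deviation to one decimal place; the ages are made up.

```python
# Minimal sketch: reporting a baseline mean and standard deviation at a
# sensible precision (one decimal place). The ages below are made up.
ages = [54, 61, 47, 58, 63, 50]

mean = sum(ages) / len(ages)
sd = (sum((a - mean) ** 2 for a in ages) / (len(ages) - 1)) ** 0.5

# One decimal place is plenty for a variable measured in whole years.
print(f"Age, mean (SD): {mean:.1f} ({sd:.1f})")
```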
17:36
And here's an example baseline characteristics table, and
this is an example from the CONSORT group.
In this table, you'll see that they have the treatment groups in the columns,
and they have the number of people allocated to each group, N equals
141 and N equals 142. They describe in the rows
how each variable is expressed, the mean age plus or minus the standard deviation.
And you'll notice they use reasonable precision.
They don't go out to too many decimal places;
we only have one decimal place showing here.
They have smokers shown
as n (percent),
so we have both numerator and denominator data.
18:17
And this is another example from CONSORT of a results table.
In this results table, they show
both the primary and the secondary outcomes.
And this was a treatment for rheumatoid arthritis.
They have the number and the percent
of people meeting the primary outcome at 12 weeks.
And then they have the treatment
effect, which is the difference in the percents.
They have the treatment effect with the confidence interval,
and it's important to include a measure of the variability.
And in the results table, we do include p-values.
18:50
And on this slide we have an example of
an incidence curve, or an inverse Kaplan-Meier curve.
And you'll notice, at the bottom, that we have
the number at risk at each of the time points.
You can also include this number at risk, or
the number of people included at each of the
time points, if you have a continuous variable.
And this is a useful number because it lets us
know, especially out at the far right of the figure,
whether there are only a few people included in that group, and therefore whether
or not we should believe any differences we see far out to the right.
You'll notice underneath the figure we have the number at risk in
each of the three treatment groups at each of the time points, and
this is an important number to include because it
lets us know, at the far right of the
figure in particular, whether or not we have a
lot of people contributing to the estimates at that point.
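Here is a minimal Python sketch of how the number at risk shown under such a curve can be tallied; the follow-up times and time points are made up for illustration.

```python
# Minimal sketch: the "number at risk" row shown under an incidence curve.
# Follow-up times (in months) below are made up.
import numpy as np

# Observed follow-up time per participant, whether they had the event or were censored.
followup = np.array([3, 8, 12, 15, 20, 24, 24, 24, 6, 18])
timepoints = [0, 6, 12, 18, 24]

# A participant is still at risk at time t if they have neither had the event
# nor been censored before t, i.e., their follow-up time is at least t.
for t in timepoints:
    at_risk = int(np.sum(followup >= t))
    print(f"Month {t}: {at_risk} at risk")
```

Reporting these counts alongside the curve makes it clear how much data supports the estimates at the later time points.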
19:41
And finally, the CONSORT guidelines also talk about what
should be included in the discussion of the paper.
So first, we need an interpretation of the results.
We can give a conclusion about the study hypothesis and
reiterate the key results, not all the results;
we don't want to recapitulate the entire results section.
And then we need to talk about the limitations,
and be honest about what went wrong in the trial.
Something always goes wrong.
We need to talk about what it was, what the potential
sources of bias are, and where we think we might have imprecision.
21:02
We need to know who funded the trial and what the role of the funders was.
Did they have a say in the design, the conduct, or the analysis of the trial?
Most journals have specific guidelines on how conflicts of interest should
be reported, and how the role of the funder should be reported.
In section C, we're going to talk about how to evaluate
what's written to see whether or not the trial was conducted appropriately.