
~Gaelen's Wiki~

Week 7: Hsp90 versus Hsp70 - different effects on disease phenotype

Paper for this week:

Points of Discussion

0. Unfamiliar Techniques / General Questions

  • LUMIER Assay
    • High-throughput, but I think a sort of janky technique overall... It's interesting that this technique is only compatible with answering one specific question, and I'm kind of curious about how they ended up designing this experiment.
    • I really didn't like how they kept referencing the size of their library as 2,300 genes when they clearly only have interaction data for 1,628. I totally understand that it's hard to make a FLAG-tagged construct that expresses properly, so that's where the drop in quantified proteins comes from - but my larger question is whether every protein naturally interacts with the reported chaperones.
      • For instance, I've worked with 3xFlag tags a lot in these contexts, and it takes a substantial amount of optimization to figure out the best place to put the tag without affecting your bait's activity. My argument is that any non-tagged bait may not naturally interact with either Hsp90 or Hsp70, and that you're only measuring a perturbed interaction.
      • The way that the authors sort of approach this is by comparing the ratio of WT to Mutant interactions - so you'd expect that the 3xFlag tag would similarly affect the WT protein and a point mutant... Which is fine, I guess, but I feel like they were mostly just proud of this previous paper and wanted to mine it for another.
    • I guess my whole point is that it seems like a more-targeted, lower-throughput starting analysis would have impressed me more...
      • I was much more convinced when they started the deeper analysis on FANCA genes - and validated these findings by orthogonal Co-immunoprecipitation analysis.
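To make the WT-versus-mutant comparison above concrete, here's a minimal sketch of the LUMIER-style scoring as I understand it (function names and numbers are mine, not the paper's): luminescence from the tagged chaperone pulldown is normalized to how much bait was captured, and the mutant/WT ratio flags destabilized variants.

```python
def interaction_score(lum_pulldown, bait_level):
    """Chaperone luminescence normalized to how much 3xFLAG bait was captured."""
    return lum_pulldown / bait_level

def mut_vs_wt(wt_score, mut_score):
    """Ratio > 1 means the mutant binds the chaperone more strongly than WT."""
    return mut_score / wt_score

# Toy numbers: a point mutant that binds Hsp70 three times more than WT.
wt = interaction_score(lum_pulldown=1200.0, bait_level=400.0)   # 3.0
mut = interaction_score(lum_pulldown=2700.0, bait_level=300.0)  # 9.0
print(mut_vs_wt(wt, mut))  # 3.0
```

The hope (as the authors frame it) is that any perturbation from the 3xFLAG tag hits WT and point mutant similarly, so it cancels out of the ratio.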

1. Hsp90 versus Hsp70 interaction helps define variability in disease penetrance

  • I think it's pretty interesting that there is a strong correspondence between the clinical severity of a mutation and its bias toward interaction with Hsp70.

2. Compensatory double-mutant for FA patients can eliminate Hsp90 dependence

  • I've always thought twin studies are the coolest way to look at phenotypic variation.
  • Comparing double-mutant-compensated cells to single-mutant-disease cells shows that the double mutants show less interaction with chaperones, and that the disease mutants are partially buffered by Hsp90 binding - but that buffering is easily overwhelmed.
  • In this twin comparison, the authors show that the phenotypic variation in genetically identical individuals can be mediated by interaction with Hsp90.

3. Could proteasome inhibition be a potential route of therapy for patients who have a rare mutation that biases toward Hsp70 instead of Hsp90?

Week 6: Cohesin Loss Eliminates All Loop Domains

Paper for this week:

Points of Discussion

0. Unfamiliar Techniques / General Questions

  • AID system ~~> Inducible degradation of target protein
    • Why aren't more people using this?? It seems super cool!! I've always been interested in how to knock down protein activity if there's too little degradation for RNAi - and this seems like it's perfect.
    • Could combining AID with RNAi give a long-term way of quickly abolishing protein activity?
    • The more I'm reading about it, the more I'm getting into this method! I'd be stoked to use it!
  • Low-Res HiC versus Hi-Res HiC?
    • Is the difference just depth of sequencing?
    • They implemented an analytical method to enhance the signal of low-res data; was it actually that much harder to just do hi-res?
  • nth dimensional analysis?
    • This is probably why I will never be a real data scientist - I just have a hard time visualizing and interpreting something when someone tells me to consider the nth dimension...
  • PRO-seq / GRO-seq
    • What is this?
    • My impression is that this is sort of the RNA equivalent of a pulse-chase experiment: you specifically label newly synthesized RNA molecules starting at a specific time point, then compare that time point in treated vs. untreated cells. This lets you disregard RNA molecules synthesized before treatment, which I guess would sharpen the transcriptional changes resulting from treatment... But people have been using standard RNA-seq data in these differential analyses for a long time, and it was pretty effective, right?
    • Why this instead of standard RNAseq?
  • I like the in silico simulation of expected contact points, but the funky blobs of spaghetti at the bottom of figure 6 are kind of silly - did it actually contribute to the conclusions to show two effectively-identical blobs?
    • EDIT: Sorry, my printout was in black and white and I couldn't tell that there was actually a real difference. Yah, I can agree that the left blob (untreated) has more interesting and diverse contact mapping than the right blob (treated).
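On the low-res vs. hi-res Hi-C question above: part of the difference really is sequencing depth - more read pairs per bin lets you call contacts in smaller bins. A toy sketch of what "lower resolution" means for a contact matrix, just summing counts into coarser bins (everything here is illustrative, not the paper's actual pipeline):

```python
def coarsen(matrix, factor):
    """Sum contacts within factor x factor blocks to mimic lower-resolution bins."""
    n = len(matrix) // factor * factor  # drop any ragged edge
    return [
        [
            sum(matrix[i + di][j + dj] for di in range(factor) for dj in range(factor))
            for j in range(0, n, factor)
        ]
        for i in range(0, n, factor)
    ]

hi_res = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 contact map
low_res = coarsen(hi_res, 2)  # → [[10, 18], [42, 50]]
```

Going the other way (recovering hi-res-looking signal from low-res data) is the hard part, which is presumably why they needed an analytical enhancement method.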

Week 5: Just a TAD misregulated; Oct 31, 2017

Paper for this week:

Points of Discussion

0. Unfamiliar techniques / General Questions

  • The authors use a few sequencing techniques that I haven't learned about in the past, namely: 4C-Seq and Hi-C.
  • It seems strange to me that for the Doublefoot (Dbf) mouse, they describe the utility of the mouse model by saying that two out of three phenotypes closely resemble the human disease, but that seems like low representation to me... It would have been nice to see them genotype the non-representative phenotype to show whether their editing worked as planned.

1. Model selection and verification

  • The authors selected three human diseases which are all similar but distinct:
    • All three diseases involve a deletion/insertion/translocation which results in a gene from a distant TAD being brought into proximity of the TAD containing EPHA4.
  • In an understated feat, the authors create three mouse lines with comparable deletions/insertions/translocations which result in nearly identical phenotypes to those of the human patients.
    • This is a use of CRISPR/Cas9 that I rarely consider: I usually think of it as a tool for targeted gene insertion, but this is actually closer to what the Cas9 endonuclease was really intended to do...
    • We just learned about the tools of mouse genetics, so I feel like we should have an appreciation for the work/money that went into this study. The authors had to perform CRISPR editing and selection on mouse ESCs and then use the successfully edited ESCs to generate embryos. It's shockingly convenient that their models so closely encapsulate the human disease phenotype, because if these models hadn't worked, a whole lot of money would have gone straight down the drain...
    • I do understand that they compared the change in TAD structure in their ESCs to patient TAD structure, but still, I'm impressed at the work that goes into preparing three mouse lines that end up characterizing the desired disease phenotype.

2. Characterizing misexpression of genes in mouse models

  • The authors use RNA-Seq to show the typical expression of their four regions of interest (Epha4, and the three disease genes Pax3, Wnt6, and Ihh) in WT mouse embryonic limb buds.
    • Each gene has a unique expression pattern that is clearly highly regulated (different tissues in different parts of the limb)
  • The authors then show by RNA-Seq that the mutant models all show a change in expression of their disease gene that ends up matching the WT expression pattern of Epha4.
    • They also show that these gene loci interact specifically with gene-containing regions of the Epha4 TAD, with little-to-no interaction with non-coding sequences of this TAD.
  • Quite interestingly, the authors verify that each of the gene loci of their mutants interact specifically with the Epha4 TAD, and do not interact with each other (there is a high fidelity of structural interaction in each of these disease phenotypes).
    • I would guess that in cases where the disease loci interact somewhere else, you would get a different disease phenotype - the authors selected these mouse lines based on their conforming to the desired phenotype. Presumably, there are other diseases/phenotypes associated with other TAD interactions.

3. Verifying the value of these mice lines as disease models (translative capacity)

  • The authors note that TAD structure is conserved both across species and across cell lines, and that the inversions/deletions/translocations in their mouse models therefore ought to recapitulate the human disease state. They also thought to test adult fibroblasts from patients to show that, in comparison to matched wild-type human controls, the 4C-seq and Hi-C profiles perfectly match those of their mouse models.
    • This basically brings me back to my appreciation for this model. It just seems like so many things could be different between these mice and humans that I wouldn't have expected a perfect recapitulation of a complex human developmental disease in a mouse model. But it seems like this is a powerful tool.

4. Showing interaction of disease loci with Epha4-TAD enhancers

  • This is a section of the paper in which the methodology is a little iffy for me.
  • The authors used a LacZ transgene reporter to show that a specific set of enhancers in the Epha4-TAD region are involved in the expression of the disease loci of the mutant strains.
    • They showed by 4C-seq that the promoters of the disease genes interact ectopically with all three of the interrogated enhancers in the Epha4-TAD region.

5. Changes in regulation of genes based on the boundaries of TADs

  • The protein CTCF is enriched at the boundaries of TADs.
  • The authors created ANOTHER set of mutant mice which included similar translocations/inversions/deletions for each disease model, but kept the boundaries of the respective TADs intact.
    • Jesus, does that mean they made 6 mouse lines for this paper?? Am I being absurd by thinking that this is so much work?
  • These new mutant mice show no aberrant limb formation, suggesting that it is the disruption of TAD boundaries which dictates the disease phenotype/ limb malformation.

Week 4 Rhino Hunting; Oct 24, 2017

Paper for this week:

Points of Discussion

1. I love this paper and all papers like it: Hardcore characterization of a new protein to update a previously incorrect model or paradox.

  • The authors started with a confusing cellular outcome: the selective transcription of transposon-repressing elements (piRNAs) encoded in heterochromatinized sequence.
  • They investigated the two possible models of this heterochromatin-transcribing mechanism: 1) piRNA is transcribed via the continued transcription of flanking genes, or 2) piRNA transcription has its own independent initiation mechanism. They used clever manipulation of flanking promoters to debunk the first model and favor the second.
  • Next, the authors used a transposon de-repression screen to identify a previously-uncharacterized protein that they subsequently renamed to Moonshiner (Great name, btw).
  • The whole rest of the paper is focused on determining how this Moonshiner protein functions and what it does.

2. Methods for characterization

  • The authors used an effective array of techniques to investigate the roles and behaviors of Moonshiner. I think that they did a great job of addressing each of their questions, and it seems like they got to try out a ton of exciting techniques to prove each of their points. I would be super stoked to have the chance to plan out and implement this kind of study.
  • I spent three years doing interaction proteomics, so I really appreciate that they used a quantitative proteomics technique for their experiments - even if the technique isn't all that cool. Label-Free Quantitation is a hassle, and I'm not sure why they used it instead of an isobaric labeling technique, but their LFQ clearly worked well considering it cleanly enriched what turned out to be the binding partners of this Moonshiner protein. I appreciate that they validated their results with a reciprocal pulldown.
  • They next used RNA sequencing to show that Moonshiner and Rhino mutants show a reduced degree of piRNA transcription (and unaffected background RNA transcription) - suggesting a very specialized role for each of these proteins.
  • I think that quantitative FISH is a sweet way to show the localized expression of these piRNA transcripts - in WT flies, it is only present in the developing ovaries, but in Moonshiner-deficient mutants there is 10x less in situ hybridization fluorescence signal. They also used this FISH method to show that Moonshiner, Rhino, TRF2, and TFIIA-S are all required for sufficient piRNA transcription.
  • These same FISH techniques, however, show that Rhino must have other functional interactions beyond complex formation with Moonshiner. The authors showed that a number of piRNA genomic clusters have more-typical promoter sequences at their flanks, allowing for a more traditional format of transcription. These other clusters are somehow still dependent on Rhino for transcription, but are independent of Moonshiner.
  • The authors used a clever series of promoter deletions to show that Moonshiner-independent piRNA clusters can become Moonshiner-dependent.
  • The authors next show that Moonshiner recruitment can increase the level of transcription in typically-dormant piRNA clusters of a different cell line (Schneider cells, a Drosophila cell line with macrophage-like behavior that, critically, does not express endogenous Moonshiner). They used CRISPR-Cas9 to introduce ectopic Moonshiner expression in these cells to show that this protein is sufficient for initiation of piRNA transcription. As a corollary, ovaries deficient in TRF2 and TFIIA-S do not transcribe piRNAs despite endogenous expression of Moonshiner.
  • To show that you can bypass Moonshiner's recruitment of TRF2, the authors used the sickest nanobody experiment. They introduced a Deadlock-GFP fusion protein, which is recruited to piRNA genomic sequence through Rhino interaction, as well as a TRF2-AntiGFP-nanobody fusion, into Moonshiner-deficient flies. This crazy bypass system produces flies with 90% of typical fertility - a marked rescue of the totally sterile phenotype of Moonshiner-deficient flies.

3. My only issue is that this paper doesn't do a good job of framing why this is interesting. I'm mostly just into characterizing proteins with unknown function, but it would be nice if the authors spent some time framing the purpose of this study.

Week 3 TZAP or not TZAP; Oct 17, 2017

Paper for this week:

I presented this week, so please reference my slides for my discussion!

Week 2 Discussion Questions/Topics: Oct 10, 2017

Paper for this week:

Points of discussion

1. My visceral reaction to this topic

  • I hate this topic.
  • My gut just screams, "I am a scientist! I want to create information and this is a waste of time! Who cares about whether some nerd will be able to find my data in a huge curated list!!"
  • Maureen 13:04, 10 October 2017 (PDT): I used to agree with this too and, if I am honest, still have that reaction. A bit. BUT now that I'm on a project that actually attempts to answer my biological question but needs to use interoperable datasets, I see the light.

2. Why I'm objectively wrong about hating it

  • ~Personal Anecdote~:
    • In my old job, we had a very systematic way of keeping track of everything (filenames, organization systems, etc).
    • Early on, I took it for granted that there was an established system in place which allowed our whole team to backtrack and troubleshoot or whatever, so for my first year I did not do a very good job of annotating datafiles and making sure it was easy to go back.
    • When it was time for me to leave the Broad (and come to OHSU), I went through my old experimental files and realized that I had in fact done a TERRIBLE job of keeping track of everything, and that I was the only one who would ever be able to navigate the mess I had created. Each experiment had well over a dozen associated documents and piles of different versions (ExptResults, ExptResults_Real, ExptResults_Real_Final, etc), and the naming system was effectively incoherent, and I couldn't figure out which experiment went with which mass spec datafile, and oh god it was a mess.
    • It took me weeks of painstakingly going through everything I had ever done to put everything back in order, but I'm still worried that that first year will cause headaches for a lot of people...
  • The short version is that I can easily imagine the splitting headaches every curator at every database feels whenever some new cocky author comes with a terribly annotated dataset.
  • The ability to reference, cite, and re-version data is quickly becoming super important for all scientists.
    • This is especially true for the life sciences, because the traditional form of experimentation, which was easily curated (e.g., one series of experiments on one protein in one model system), is transitioning to a new big-data style (e.g., hugely multiplexed and automated studies across hundreds or thousands of conditions; also meta-analysis!)

3. Getting to this paper

  • So while reading this paper makes me almost physically ill with impatience and dread, I appreciate that someone has compiled a resource that has actionable suggestions.
  • In the end, making your own work reference-able helps yourself as well as the larger scientific community. Your work literally doesn't matter if no one can find it.
  • I have personal feelings regarding Lesson 5 (AKA ~Personal Anecdote #2~):
    • I hate the protein family Septin. Fun fact: the shorthand gene symbols for Septins are always Sept1, Sept2, ... etc. And when you have a huge data table with thousands of rows and dozens of columns, and Excel automatically converts them to September 9 or whatever, and these freaking Septins keep breaking your computational/statistical tools because R then converts the date format to another totally different format, you come to hate them.
  • I also really like Lesson 8 (making the URI very obvious, so you don't just take the link in the browser address bar, aka what I literally always do because I have zero patience).
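Since the Septin rant above is really a data-cleaning problem, here's a defensive sketch for recovering gene symbols once Excel has already mangled them. The regex and mapping are my own illustration, not from the paper - and the real fix is to import gene-symbol columns as text in the first place:

```python
import re

# Excel turns "SEPT9" into "9-Sep" or "Sep-09" on open; this tries to undo that.
MANGLED = re.compile(r"^(?:(\d+)-Sep|Sep-0?(\d+))$", re.IGNORECASE)

def unmangle(symbol):
    """Return the original SEPTn symbol if Excel converted it to a date."""
    m = MANGLED.match(symbol.strip())
    if m:
        return "SEPT" + (m.group(1) or m.group(2))
    return symbol  # anything else passes through untouched

print(unmangle("9-Sep"))   # SEPT9
print(unmangle("Sep-02"))  # SEPT2
print(unmangle("TP53"))    # TP53
```

(The same trap bites MARCH and DEC gene families; a production version would handle those too, and would flag rather than silently rewrite.)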

Week1 Discussion Questions/Topics: Oct 5, 2017

Papers for this week:

Factors contributing to the debate?

1. Reductionist approach

  • Cancer is clearly an umbrella term for a highly complex and wide ranging series of diseases - it is unlikely that any single simple model will capture the intricacies of all subtypes of cancer across all populations.
    • However, the authors do try to state that they are not claiming to have found something new and groundbreaking - they simply hope to use statistics to show that there is a trend between cancer driver mutations and number of replications for any given tissue.

2. Validity of assumptions (related to reductionist approach)

  • E.g., they assumed that the referenced stem cell numbers and replication rates are representative of all global human populations.
  •  ? Other assumptions?
  • My impression was that they intentionally tried to choose cancer types that met a number of specific criteria:
    • Cancers in tissues with characterized stem cells
    • Cancers for which environmental and hereditary factors have been at least partially characterized
    • Cancers with a small number of driver mutations (ignoring cancers dependent on large network disturbances)
    • I have no idea why they did not include any subtypes of breast cancer, but I would hope that this exclusion was not a form of "data massaging"

3. Value of results and conclusion to society:

  • The conclusion of this paper is easily misinterpreted: the authors suggest that the difference between cancer risks in different tissue types is correlated to the number of stem cell divisions.
    • (They calculate that the influence of this correlation could contribute to up to 66% of the differences in risks for different tissue types.)
  • The authors might have overemphasized the randomness of cancer incidence and underemphasized the value of prevention.
    • However, I do agree with their larger claim: there is only so much that preventative measures can do - the heart of cancer survivability must be rooted in early detection and treatment.
    • Even if there currently exists no therapeutic route, this is a factor that can change over the next several years. Proper screening should not only involve finding tumors early, but also a more-accurate diagnosis of malignancy (eg., no more excising lumps that never would have become malignant).
  • Also, the paper was widely misconstrued in the public eye as suggesting that 2/3 of total cancer incidents are attributable to "bad luck"
    • The authors' conclusion could have been communicated more effectively to explain the difference.
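For what it's worth, my reading of the "up to 66%" figure above is that it is the squared Pearson correlation (r²) between log(total stem cell divisions) and log(lifetime cancer risk) across tissue types - i.e., the fraction of the variance in risk differences that the correlation can account for. A sketch with entirely made-up numbers:

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical tissues: log10(lifetime stem cell divisions) vs. log10(risk).
log_divisions = [7.0, 9.5, 10.5, 12.0]
log_risk = [-4.2, -2.8, -2.5, -1.0]
r = pearson_r(log_divisions, log_risk)
print(r ** 2)  # fraction of variance in log-risk "explained" by divisions
```

Note this only speaks to differences in risk *between* tissues, not to what fraction of any individual cancer is due to chance - which is exactly the misreading discussed above.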

Overall, I am unconvinced that the 2015 paper needed a 2017 follow-up, nor am I convinced that the 2017 paper effectively addressed the points of debate raised against the first paper. However, I feel like this paper was intended to stir the research community toward finding the mechanistic causes that underlie the differences between cancer risks in different tissue types: the authors present a model, and all models are meant to be tested, broken, and updated.
What we can currently take from this model is that we should:

a) continue doing basic research to understand mechanistic causes of cancer,
b) search for therapeutic options for a wider variety of cancers,
c) improve early detection protocols (which includes finding ways to discriminate malignant from nonmalignant growths),
d) understand that there are limitations to preventative measures.