It happens to me occasionally: I get interested in a cute little problem (e.g. graph reconstruction), think about it for a while, come up with some partial results, and then poke at the literature and quickly go ohhhhh shittttt, people have really studied this, and a single section of their papers has way more insight than I could hope to generate if I slogged away at the problem for a month.
Which is okay, since I'm not going to slog away at it for a month, but rather go back to working on problems I can make headway on. But it's still fun to play with such things, and a good sort of exercise to wrestle with new problem domains.
I think all it takes to go off the deep end into kooksville is to skip that last step: being at least a token amount welcoming to other people's ideas.
Anyway, case in point (of other people doing awesome work, not of people being kooks) is David Rivshin's master's thesis, which has some algorithmic cleverness that let him computationally determine a whole ton of statistics on graphs that are sort of "trying to be counterexamples" to the reconstruction conjecture. In particular, on page 19 there are a few pairs of graphs that have 7 "cards" of their "deck" (i.e. vertex deletions) in common! Using some dumb perl scripts, I managed to find a pair of 8-vertex graphs with 6 cards in common, and determined that there are no 9-vertex graph pairs with 7 in common. After that I bumped up against the fact that there are something like 12 million 10-vertex graphs, and finding pairs of things in a 12-million-member set that satisfy some property sounds very 72-trillionish, and that's no good for anybody.
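For concreteness, here's roughly the sort of check those dumb perl scripts were doing, redone as a Python sketch on top of networkx. The helper names and the max-matching way of counting common cards are mine, not anything from Rivshin's thesis; it's illustrative, not his algorithm (which is much cleverer):

```python
import networkx as nx

def deck(G):
    """All vertex-deleted subgraphs ('cards') of G."""
    return [nx.Graph(G.subgraph(set(G) - {v})) for v in G]

def common_cards(G, H):
    """Number of common cards: a maximum matching between the two
    decks, pairing up cards that are isomorphic."""
    dg, dh = deck(G), deck(H)
    # Bipartite compatibility graph: edge iff the two cards are isomorphic.
    B = nx.Graph()
    B.add_nodes_from(('g', i) for i in range(len(dg)))
    B.add_nodes_from(('h', j) for j in range(len(dh)))
    for i, gi in enumerate(dg):
        for j, hj in enumerate(dh):
            if nx.is_isomorphic(gi, hj):
                B.add_edge(('g', i), ('h', j))
    # Largest set of disjoint isomorphic pairs of cards.
    return len(nx.max_weight_matching(B, maxcardinality=True))
```

For example, the path on 4 vertices and the star K_{1,3} share exactly 2 cards (the two copies of the 3-vertex path you get by deleting an end). The pairwise isomorphism tests are exactly why the 12-million-graph case blows up: you'd be calling this on tens of trillions of pairs.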
Also, Bowler, Brown, and Fenner's "Families of Pairs of Graphs with a Large Number of Common Cards" [sorry, I can't find a copy not behind a !@#$ paywall] has a nice construction on page 16 that yields as many common cards as you want: if you want n nonisomorphic co-occurring vertex deletions, you need a pair of graphs on (asymptotically) 5n vertices. (Note the paper effectively says 5n/2, because it's thinking about multisets of vertex deletions, whereas I'm thinking about sets, and the deletions come in isomorphic pairs.) You basically take a path graph with a pair of dangling degree-1 vertices hanging off it at every third vertex:
   o        o        o
   |        |        |
o--o--o--o--o--o--o--o
   |        |        |
   o        o        o
and the other graph is a ring with, again, a pair of danglies at every third vertex of the ring, except that exactly one of the danglies (just one, globally) is missing. This is harder to ASCII-ize, but imagine:
                o        o
                |        |
...-o--o--o--o--o--o--o--o-...
       |        |        |
       o        o        o
where those outer two half-edges are supposed to wrap around and connect. Here's how this generates lots of different common vertex deletions: imagine deleting one of the ring vertices that has no danglies hanging off of it. This turns the ring into a path graph with one danglie missing somewhere, and all of the different somewheres are nonisomorphic to one another; meanwhile the original path graph can be made isomorphic to any of them by chopping off exactly one dangling vertex.
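If you want to poke at the construction yourself, here's a sketch of the pair in Python/networkx. The parameterization (k pendant-pair positions) and the exact indices where the danglies attach are my reading of the pictures above, not the paper's own definitions, so treat this as illustrative:

```python
import networkx as nx

def path_with_danglies(k):
    """Path on 3k-1 vertices with a pendant pair hanging off
    every third path vertex (k pendant pairs total)."""
    G = nx.path_graph(3 * k - 1)        # spine: nodes 0 .. 3k-2
    nxt = 3 * k - 1                     # next fresh node label
    for i in range(k):
        spine = 3 * i + 1               # every third spine vertex
        for _ in range(2):              # a pair of degree-1 danglies
            G.add_edge(spine, nxt)
            nxt += 1
    return G

def ring_with_danglies(k):
    """Cycle on 3k vertices with a pendant pair at every third ring
    vertex, except exactly one pendant (globally) is omitted."""
    G = nx.cycle_graph(3 * k)
    nxt = 3 * k
    for i in range(k):
        spine = 3 * i
        pendants = 2 if i > 0 else 1    # first position loses its danglie
        for _ in range(pendants):
            G.add_edge(spine, nxt)
            nxt += 1
    return G
```

Both graphs end up with 5k - 1 vertices (3k - 1 spine plus 2k pendants, versus 3k ring plus 2k - 1 pendants), which matches the "5n vertices for n common cards" rate; the path version is a tree while the ring version contains a cycle, so the pair is never isomorphic.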
There's something I really like about both of these works, something that seems ubiquitous and useful when dealing with really hard conjectures about the existence of mathematical objects: look for a quantitative sliding scale, at one end of which sits the conjectural thing that might or might not exist. The win is that (at least in this case) you can write programs to hunt around for the almost-counterexamples, the close calls, the weird outliers, and hopefully learn something from them. It looks as if graph reconstruction is still quite tough even after all this data has come pouring in, but it's neat to think about.