Actually this is one of the main problems you face when analyzing scRNA-seq data, and there is no established method for dealing with it. Different (dedicated) algorithms handle it in different ways, but mostly you rely on how good the error modelling of your software is (a great read is the review by Wagner, Regev & Yosef, especially the section on "False negatives and overamplification"). There are a couple of options:
- You can impute values, i.e. fill in technical zeros with estimated expression. CIDR and scImpute do this directly. MAGIC and ZIFA project cells into a lower-dimensional space and use their similarity there to decide how to fill in the blanks (see the first sketch after this list).
- Some people straight up exclude genes that are expressed at very low levels, e.g. detected in only a handful of cells. I can't give you citations off the top of my head, but many trajectory inference algorithms like monocle2 and SLICER have heuristics to choose informative genes for their analysis (the second sketch below shows the usual detection-rate cutoff).
- If the method you use for analysis doesn't model gene expression explicitly but uses some distance metric to quantify similarity between cells (cosine distance, euclidean distance, correlation), then the noise introduced by dropout can be drowned out by the signal from highly expressed genes (third sketch below). Note that this is dangerous, as highly expressed genes are not necessarily informative.
- ERCC spike-ins can help you quantify (and correct for) technical noise, but I am not familiar enough with the Chromium protocol to say whether they apply there (the last sketch below shows the usual first look at spike-ins).
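
To make the imputation idea concrete: here is a deliberately naive sketch of the neighbourhood-averaging idea that MAGIC-style methods build on. This is *not* the actual MAGIC algorithm, and the `counts` matrix is made-up data so the snippet runs on its own:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# made-up cells x genes matrix standing in for a (log-normalised) expression matrix
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(300, 2000)).astype(float)

# find each cell's nearest neighbours in a PCA-reduced space
pcs = PCA(n_components=20).fit_transform(counts)
_, idx = NearestNeighbors(n_neighbors=15).fit(pcs).kneighbors(pcs)

# replace each cell's profile by the average over its neighbourhood (self included),
# i.e. "fill in" technical zeros with values borrowed from similar cells
smoothed = counts[idx].mean(axis=1)
```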
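
Filtering lowly expressed genes is usually just a detection-rate cutoff, something along these lines (the threshold of 10 cells is arbitrary, tune it to your dataset):

```python
import numpy as np

# made-up cells x genes raw count matrix
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(300, 2000))

detected_in = (counts > 0).sum(axis=0)   # number of cells in which each gene is seen
keep = detected_in >= 10                 # keep genes detected in at least 10 cells
filtered = counts[:, keep]
```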
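
On the distance point: euclidean distance in particular is dominated by the highly expressed genes, so dropout in lowly expressed ones moves it relatively little. You can check this on your own data by comparing the distance matrix before and after zeroing out the lowly expressed half of the genes; here is a toy matrix just to show the mechanics:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(100, 2000)).astype(float)

# pairwise cell-cell distances under a few common metrics
d_euclidean   = squareform(pdist(counts, metric="euclidean"))
d_cosine      = squareform(pdist(counts, metric="cosine"))
d_correlation = squareform(pdist(counts, metric="correlation"))

# zero out the lowly expressed half of the genes and see how much the distances shift
low = counts.mean(axis=0) < np.median(counts.mean(axis=0))
dropped = counts.copy()
dropped[:, low] = 0
d_euclidean_dropped = squareform(pdist(dropped, metric="euclidean"))
shift = np.abs(d_euclidean - d_euclidean_dropped).mean()
```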
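
And if you do end up with ERCC spike-ins in your data, a common first look is the fraction of counts per cell coming from the spike-ins, which gives you a rough handle on technical noise and capture efficiency. The gene names here are invented; spike-ins are conventionally prefixed with "ERCC-":

```python
import numpy as np

# made-up counts and gene names; in real data these come from your count matrix
rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(100, 2000)).astype(float)
gene_names = [f"ERCC-{i:05d}" if i < 92 else f"Gene{i}" for i in range(2000)]

is_spike = np.array([g.startswith("ERCC-") for g in gene_names])
spike_fraction = counts[:, is_spike].sum(axis=1) / counts.sum(axis=1)
# cells with an unusually high spike-in fraction contain little endogenous mRNA
# and are often flagged as low quality
```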
Since we are speaking about noise, you might also consider using a protocol with unique molecular identifiers (UMIs). They remove amplification bias almost completely, at least for the transcripts that you do capture...
EDIT: Also, I would highly recommend using something more advanced than PCA for the analysis. Software like the above-mentioned Monocle or destiny is easy to use and increases the power of your analysis considerably.
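
Monocle and destiny are R packages; if you happen to work in Python, scanpy gives you a comparable diffusion-map embedding. A minimal sketch, assuming scanpy is installed (the PBMC dataset is just the example data shipped with scanpy, swap in your own AnnData object):

```python
import scanpy as sc

adata = sc.datasets.pbmc3k()              # example data; replace with your own AnnData
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)    # kNN graph the embedding is built on
sc.tl.diffmap(adata)                      # diffusion map, same family of methods as destiny
sc.pl.diffmap(adata)                      # plot the first diffusion components
```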