This article about applying harm reduction to your secure use of the Internet has been going around. I can't share it in good conscience without adding a few things to it. I work for Google, but the following is my personal opinion.
If you're concerned about your data being collected (and I understand that you may be concerned about Google retaining your data not because you think Google will use it inappropriately, but because you fear that the federal government will require them to surrender it), use Chrome without being logged in. People disagree on how safe Tor really is, but my money is on "not." If you don't have the technical expertise to read the source code for yourself, you probably shouldn't be risking your life on it. It doesn't guarantee full anonymity, and the reasons why are fairly complicated, which is itself a good sign that you could be lulled into a false sense of security.
For email, I wouldn't really recommend riseup. The author alludes to this, but: assume any widely used anarchist/radical site has been compromised already. Holding a low volume of data also makes a service an easier target.
A friend I trust has confirmed that Signal is trustworthy. I agree with this article that regular SMS is not secure.
Passwords: use a password manager, turn on 2-factor wherever you can. Pretty much what they say.
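To make the password advice concrete: the whole point of a password manager is that every site gets a long, unique, random password you never have to memorize, so one breached site can't unlock the others. A minimal sketch of that generation step in Python (the function name and the 20-character default are my own illustration, not any particular manager's behavior):

```python
# Sketch of the core thing a password manager does for each site:
# draw a long password uniformly at random from a large alphabet,
# using a cryptographically secure random source (the stdlib
# `secrets` module), never a plain PRNG like `random`.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password of `length` letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

A 20-character password over that ~94-symbol alphabet has far more entropy than anything memorable, which is exactly why you let the manager remember it for you.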
Google: Don't log in when searching if you're worried (use multiple browser windows). As an insider, I can say Google takes user trust and privacy extremely seriously. I can't share everything that backs up that belief, but I will vouch for them.
It was pointed out to me that: "Turning off geolocation on a cell phone doesn't do much; the government can and will subpoena cell phone tower records which provide enough geolocation information."
If you would like to see how Google works with government requests for data, watch this official video on how Google responds to search warrants.
I don't trust DuckDuckGo any further than I can throw them, honestly. I'd say the same about any other small service. They may be trying to do the right thing, but there are many, many ways to retain more data than you intend to, and it takes enormous human effort not to.
tl;dr: Only big companies have the resources to actually protect your privacy. Whether they want to do that is a different story. I'm confident that Google does want to do that, because without user trust, Google has no business.
Pretty much nothing is resistant to the government coercing you, your friend with the email server, or Google into giving up data, because coercion is how the government works.
Use non-discoverable media when possible. Talk in person.
Whatever you're doing, think about what security people call your "threat model": what are you trying to defend against? What concrete risks do you face if your data gets into the wrong hands? What are the benefits of using a communication mechanism that's subject to surveillance?

An example of threat modeling is your bicycle lock: if you have a nice bike and you ride in a major city, you might want to carry a heavy-duty Kryptonite U-lock at all times, plus extra locks for the wheels. That's because you can infer, based on information that you have, that your bike is attractive to thieves, there are many thieves, and they will try hard to steal your bike. If you have a rusty bike and live in a small town, you might be OK with a cable lock, because the benefit of not having several pounds of metal to carry around outweighs the risk of theft, and a good U-lock costs more than your bike did. You can think about analogous trade-offs as they apply to your use of networked communication technologies.
This is one post where it's perfectly fine to well-actually me if you have security or systems expertise.