That's not what I actually meant. Let me try to explain in more detail:
1) Everybody uses tools. Some testers use more of them, some fewer, but every single one of them uses some. For example, testing web applications without an HTTP proxy is hardly possible (there's a small proxy sketch after this list). Perhaps it's more accurate to say that some testers also code and some don't.
2) What I said in 1) about tooling goes for every single role in IT. Take developers. Do they use tools? Yes. Do they operate them manually? Yes, they use a keyboard and mouse to interact with their computers and work with their tools. Do we call them manual developers? No. So why do we single out the "manual" part of testing work? It makes no sense to me. Take any other profession. Doctors, for example: is theirs manual work? They use plenty of tools manually, so perhaps we should have manual doctors and automation doctors. Who would those be?
3) "Manual testing" is not doing the same as a script would do. People are amazing because they have an ability to notice things even when they are not looking for them. Many bugs I have found have gone like this - I was looking for some other piece of information but just happened to notice some other bug. How do you want to do this with a script that does exactly the same every time it's executed?
4) I'd rather use the term "attended testing" than "manual".
5) "Ask anyone who has to spend 2 weeks of doing regression for a monthly release." People usually mention this. But there's actually more to it than just testing: 1) Monthly release... unless it's some specific context, there's no reason why it shouldn't be an hourly release cycle. You can't fix the whole development process by focusing only on testing. 2) Regression testing... it's an important part, but it has some limits. Such tests tend to bring in less and less useful information. So unless someone goes and changes them once in a while and makes them more harsh, they become pretty useless pretty fast. And talking about "making them harsher", we are back to some exploration, not automation. 3) If someone really just follows some bullet points for 2 weeks and calls it regression testing, that's obviously wrong. But you can't substitute incompetent testers with automation. Such testers would only create useless automation scripts that do not focus on important risks or don't bring in useful information. 4) Do they have to? I think they choose to. Or maybe they don't know any better. But we are back at point 3)
6) All testing is exploratory to some degree. I go, explore some risks (here I can already use tons of tools, or do some coding), and communicate my findings. That's testing. Then I might go and code something that covers some part of my information-gathering work (see the second sketch after this list). That's development (or the "automation" part, as people tend to call it).
7) Coding is not a substitute for testing work, it's an extension of it. Can I use tools or coding to make myself a more powerful tester? Cool, let's do it. If it doesn't make me more powerful, I won't do it just because I can.
8) "Faster time to market with tools". Speed is one dimension. Not the most important one in many industries and contexts. Yes, tools can speed things up. Sometimes they can also slow things down. It really depends, it's not a rule. There're also a lot of costs associated with tools: purchase cost (not always), operation cost, creation cost (someone has to create automation tests, these people tend to ask for more money), knowledge transfer cost, troubleshooting cost (e.g. our automation suite fails, so someone spends half a day finding out what happened), training cost, opportunity cost, and possibly some other cost. Depending on the context, it might very well turn out that using tools is a waste of money, e.g. on tests that will be run only once (or a small number of times).
So, to me, using or not using tools is highly context-dependent. I think that's what I meant by the sentence you highlighted. I hope that makes it a bit clearer.