Rugpulls and honeypots thrive where people skip basic verification. Most “token cannot sell” incidents are not zero-day bugs; they’re permission flags, stealth taxes, or blacklist checks that scammers bury in the contract. The easiest edge in crypto is simply caring more than the attacker expects. Community signals and GitHub receipts will save you from a lot of bad trades.
Below is a practical workflow I use when evaluating a token, with a focus on community verification and GitHub evidence, plus the on-chain checks that tie the story together.
Start with the endgame: can you actually sell?
If a token is a honeypot, you can buy, but any sell fails or gets taxed into dust. Honeypotters often gate selling through one of three routes: a hard revert in transfer or _transfer, a blacklist or external hook that rejects sells to the liquidity pair, or a dynamic sell tax that spikes to near 100 percent at the time of exit.
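Those three gating routes can be sketched in a few lines. This is a toy Python model of the logic, not any real contract; the state keys (`trading_enabled`, `blacklist`, `sell_tax_bps`) and the `PAIR` placeholder are invented for illustration.

```python
PAIR = "0xPAIR"  # placeholder for the AMM liquidity pair address

def transfer(state, sender, recipient, amount):
    """Toy model of a gated ERC20 _transfer: returns tokens received or raises."""
    # Gate 1: hard revert when a trading flag is off for non-insiders
    if not state["trading_enabled"] and sender not in state["whitelist"]:
        raise RuntimeError("TradingNotEnabled()")
    # Gate 2: blacklist check that only bites on sells to the pair
    if recipient == PAIR and sender in state["blacklist"]:
        raise RuntimeError("Blacklisted()")
    # Gate 3: a dynamic sell tax the owner can push toward 100 percent
    tax_bps = state["sell_tax_bps"] if recipient == PAIR else state["buy_tax_bps"]
    return amount - amount * tax_bps // 10_000

state = {"trading_enabled": True, "whitelist": set(), "blacklist": set(),
         "buy_tax_bps": 300, "sell_tax_bps": 9_900}  # 3% buy tax, 99% sell tax

received = transfer(state, "0xVICTIM", PAIR, 1_000)
print(received)  # 10 — a 99 percent sell tax leaves dust
```

The point of the sketch: every gate is a branch the owner controls, so the same token can behave honestly for insiders and fail only for you.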
You can feel this on-chain before you feel it in your PnL. On Etherscan or BscScan, jump to the token’s page, then the Token Transfers tab. If buys cluster heavily but sells are rare or fail with out-of-gas or custom errors, that is your first tripwire. Automated honeypot checkers and token scanners can catch obvious traps, but they miss edge-case logic hidden behind owner-controlled flags. Treat them as a first pass, not a green light.
On DexScreener, look at the pair page. A flat price with increasing buys and almost no sells smells like a honeypot setup. If sells exist but every sell is tiny and slippage is huge, check for automated market maker (AMM) fee manipulation or a transfer tax that scales with amount.
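The buy/sell asymmetry check is easy to automate once you have a transfer list. A rough sketch, assuming transfers pulled from an explorer’s Token Transfers tab as `(from, to, amount)` tuples; the thresholds are arbitrary and the `PAIR` address is a placeholder.

```python
PAIR = "0xpair"  # placeholder liquidity pair address

def buy_sell_counts(transfers):
    """transfers: list of (from_addr, to_addr, amount) tuples."""
    buys = sum(1 for f, t, _ in transfers if f == PAIR)   # pair -> buyer
    sells = sum(1 for f, t, _ in transfers if t == PAIR)  # seller -> pair
    return buys, sells

def looks_like_honeypot(transfers, min_sample=20, max_sell_share=0.05):
    """Flag pairs where sells are a vanishing fraction of swap activity."""
    buys, sells = buy_sell_counts(transfers)
    total = buys + sells
    return total >= min_sample and sells / total <= max_sell_share

# 30 buys and a single sell (probably the dev's own wallet): suspicious.
transfers = [(PAIR, f"0xbuyer{i}", 100) for i in range(30)] + [("0xdev", PAIR, 500)]
print(looks_like_honeypot(transfers))  # True
```

A healthy pair has sells interleaved with buys from many distinct wallets; this heuristic only surfaces candidates for a manual look.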

Community verification beats shiny PDFs
Slick sites and “audits” that no one recognizes are cosmetic. The crypto community is brutally efficient at flagging bad actors. I watch a handful of on-chain sleuths on X, and I scan comment sections. If reputable accounts have already called it a honeypot, walk away. Security firms like PeckShield, CertiK, Hacken, and ConsenSys Security frequently post alerts about active rugs and stealth mints. You rarely get a second chance to be early and safe at the same time.
I also check the project’s Telegram and Discord. Are admins deleting hard questions about liquidity or contract ownership? Are bots replying faster than humans? Are “partnership” posts recycled or low-effort? That social layer often exposes more than a PDF ever will.
Quick on-chain triage that fits in five minutes
Here is a tight pre-trade workflow that catches a big percentage of scams before you spend gas:
- Pull the token on Etherscan or BscScan and confirm the contract is verified with an exact match between source and deployed bytecode, not just a “partial match.”
- Open Read/Write Contract. Look for owner-only toggles like setFeeTx, setTax, setMaxTx, setMaxWallet, setTradingEnabled, setSwapEnabled, setBlacklist, or generic setParameters. Presence is not guilt, but combined with active ownership and unlocked liquidity, it is dangerous.
- Check ownership: has the team used renounceOwnership, or do they rely on role-based control like AccessControl? If ownership is live, who holds it? A fresh EOA with no history is a red flag.
- Inspect liquidity: go to the pair on DexScreener. Is LP locked with a reputable locker, and for how long? If the deployer or owner still holds the LP tokens, the rug pull switch is live.
- Look at holders and recent mints. If mint or transferFrom fed massive supply to a dev wallet, or the top holders cluster among a few fresh addresses, expect coordinated exit liquidity behavior.
Those steps are not perfection, they are speed. If any item fails plainly, I stop there.
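The function-name part of that triage can be roughly automated. A sketch: given function names from a verified contract’s ABI, flag owner-style toggles worth reading by hand. The keyword lists mirror the names in this article; they are a heuristic, not a verdict.

```python
# Prefixes and keywords commonly seen on owner-controlled trading toggles.
SUSPECT_PREFIXES = ("set", "add", "block", "pause", "enable", "disable")
SUSPECT_KEYWORDS = ("fee", "tax", "maxtx", "maxwallet", "trading", "swap",
                    "blacklist", "bot", "whitelist", "limit")

def flag_toggles(function_names):
    """Return function names that look like owner-controlled trading switches."""
    flagged = []
    for name in function_names:
        lower = name.lower()
        if lower.startswith(SUSPECT_PREFIXES) and any(k in lower for k in SUSPECT_KEYWORDS):
            flagged.append(name)
    return flagged

abi_functions = ["transfer", "approve", "setFeeTx", "setBlacklist",
                 "setTradingEnabled", "renounceOwnership", "balanceOf"]
print(flag_toggles(abi_functions))
# ['setFeeTx', 'setBlacklist', 'setTradingEnabled']
```

A hit means “read this function and its access modifier,” nothing more; scammers also rename toggles to evade exactly this kind of grep.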
How honeypots hide in plain sight
The code usually cheats in one of a few places:
- In _transfer, the contract inserts conditions like require(tradingEnabled || isWhitelisted[sender]). That can trap new buyers while insiders sell. If tradingEnabled is owner-controlled, it can flip mid-pump.
- Blacklist mechanics sneak in with mappings like isBot[account] or isBlacklisted[account], plus functions addBot, blockAddress, or setBlacklist. Scammers often rename them to setSpecial, setTrusted, or setExcluded to look harmless.
- Fee logic hides under setFeeTx, setSellFee, setBuyFee, or a generic setTax that can reach 100 percent. Look for separate fees on sells versus buys. If the owner can update these without caps, they can zero out sellers on command.
- Transfer limits and cool-downs use maxTxAmount, maxWallet, and lastTradeBlock style checks. These sometimes only apply to sells or to transfers to the pair address.
- “Anti-bot” hooks such as preTransferCheck, onPreTransferCheck, or _beforeTokenTransfer can block sells for non-whitelisted accounts under the guise of bot defense.
If you see onlyOwner on any function that meaningfully controls trading dynamics, assume it will be used.
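The renaming trick and the sell-only limit are worth seeing side by side. A hypothetical Python model: a “setTrusted” function that is really a blacklist, plus a max-transaction cap that only fires on transfers to the pair. All names here are invented.

```python
PAIR = "0xpair"  # placeholder pair address

class Token:
    """Toy token with two disguised controls; not real EVM code."""
    def __init__(self):
        self.trusted = {}     # sounds harmless; acts as a blacklist
        self.max_tx = 10_000

    def set_trusted(self, account, ok):  # would be onlyOwner on-chain
        self.trusted[account] = ok

    def _transfer(self, sender, recipient, amount):
        if self.trusted.get(sender) is False:
            raise RuntimeError("not trusted")   # blacklist in disguise
        if recipient == PAIR and amount > self.max_tx:
            raise RuntimeError("maxTxAmount")   # only sells are capped
        return amount

token = Token()
token._transfer("0xbuyer", "0xfriend", 50_000)  # wallet-to-wallet: unlimited
token.set_trusted("0xbuyer", False)             # owner flips one flag...
# ...and token._transfer("0xbuyer", PAIR, 100) now reverts with "not trusted"
```

Note the asymmetry: the victim can still move tokens between wallets, which makes the token look alive while the exit is closed.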
GitHub verification that actually means something
Good teams leave a footprint. Even if the contract is small, a healthy repo tells you the code is not a one-night fork.
Look for a public GitHub that links from the official site or X profile. Names should match, and the commit history should predate the token launch, not appear minutes before deployment. Repos that mirror real development usually include a build system like Hardhat or Foundry, tests, and a configuration for linters or analyzers such as Slither.
Use this as your GitHub sanity checklist:
- History and contributors: multiple contributors, meaningful commit messages, and issues closed over time. A repo with one contributor and 2 commits on launch day is weak.
- Tags and releases: semantic versions, release notes, and a changelog. Random zip files or single “final” commits are suspect.
- Tests and CI: presence of unit tests for fee logic, mint limits, and access control. A simple GitHub Actions pipeline running tests or Slither is a green flag.
- Dependencies: use of well-known libraries like OpenZeppelin. Custom re-implementations of ERC20 tend to introduce footguns or backdoors.
- License and audit artifacts: a real license file and any published audits from reputable firms. If they tout “audit complete,” the PDF should name auditors like CertiK, PeckShield, Hacken, or ConsenSys, not a brand-new shell.
If you cannot tie the deployed bytecode to the code in the repo, treat the repo as marketing copy, not a source of truth.
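The checklist condenses into a quick scorecard. A sketch with arbitrary thresholds of my own choosing; the metadata would come from the GitHub API or a manual look, not from this snippet.

```python
def repo_score(meta):
    """Score 0-7 against the GitHub sanity checklist; thresholds are assumptions."""
    score = 0
    score += meta["contributors"] >= 3                      # not a one-person drop
    score += meta["commits"] >= 50                          # real history
    score += meta["first_commit_days_before_launch"] >= 30  # predates the launch
    score += meta["has_tests"]
    score += meta["has_ci"]
    score += meta["uses_openzeppelin"]
    score += meta["has_license"]
    return score

# The classic launch-day fork: one contributor, two commits, no tests.
fork_job = {"contributors": 1, "commits": 2, "first_commit_days_before_launch": 0,
            "has_tests": False, "has_ci": False, "uses_openzeppelin": True,
            "has_license": False}
print(repo_score(fork_job))  # 1
```

I treat anything in the low single digits on launch day as “the repo is marketing,” which loops back to the bytecode check below.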
Prove the deployed contract matches the repo
This is the part most people skip, and it is where scams hide. Even with a pretty GitHub, you need at least a lightweight match to the on-chain contract.
On Etherscan or BscScan, open the contract and check the verification badge. Click “Code” and look for the compiler version, optimization settings, and external libraries. If the explorer says “exact match,” the source corresponds to the deployed bytecode. If it says “partial match,” there could be linked libraries or metadata mismatches, but it can also be a decoy.
Grab the source from GitHub, compile with the same settings, and compare the bytecode locally if you can. For upgradeable contracts, inspect the proxy’s admin and implementation slots. A standard OpenZeppelin TransparentUpgradeableProxy exposes admin() and implementation() through the proxy’s admin interface on the explorer. If the admin is a personal EOA, upgrades are one wallet compromise away from disaster. A timelocked multisig is safer, though nothing is perfect.
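Partial matches often differ only in the CBOR metadata trailer that solc appends to the runtime bytecode; the final two bytes encode that trailer’s length. A sketch of comparing bytecode after stripping it, assuming standard solc output (the hex strings below are fabricated toy bytecode, not real contracts).

```python
def strip_metadata(bytecode_hex):
    """Drop the solc metadata trailer: last 2 bytes give its length."""
    code = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    meta_len = int.from_bytes(code[-2:], "big")
    return code[:-(meta_len + 2)].hex()

def same_code(onchain_hex, local_hex):
    """True if two runtime bytecodes match once metadata is removed."""
    return strip_metadata(onchain_hex) == strip_metadata(local_hex)

# Same logic compiled from two source trees: only the 4-byte trailer differs.
onchain = "0x6001600101aabbccdd0004"
local   = "0x6001600101112233440004"
print(same_code(onchain, local))  # True — identical code, different metadata
```

If the stripped bytecodes still differ, the repo is not what was deployed, no matter what the README claims.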
If the project claims “ownership renounced,” verify that on-chain through the owner() read call. For proxies, renouncing ownership of the implementation does not remove control, because the proxy admin can still point to a new logic contract. Many retail buyers miss that nuance.
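That proxy nuance is easy to demonstrate with a toy model: renouncing ownership of the implementation leaves the proxy admin untouched, and the admin can still point the proxy at new logic. This is plain Python, not EVM semantics, and the addresses are placeholders.

```python
class Implementation:
    def __init__(self, owner):
        self.owner = owner
    def renounce_ownership(self):
        self.owner = None  # produces the "ownership renounced" headline

class Proxy:
    def __init__(self, admin, implementation):
        self.admin = admin
        self.implementation = implementation
    def upgrade_to(self, caller, new_implementation):
        if caller != self.admin:
            raise PermissionError("not admin")
        self.implementation = new_implementation

logic_v1 = Implementation(owner="0xdev")
proxy = Proxy(admin="0xdev", implementation=logic_v1)

logic_v1.renounce_ownership()          # owner() on the logic now reads empty
malicious = Implementation(owner="0xdev")
proxy.upgrade_to("0xdev", malicious)   # admin control survives the renounce
print(proxy.implementation is malicious)  # True
```

The lesson maps directly on-chain: for a proxy, check who controls the admin slot, not just the implementation’s owner().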
LP behavior that correlates with scams
The fastest rugs often happen where LP is unlocked or held by the deployer. If LP is locked, verify the locker contract and lock duration. Locks for a few days during a hyped presale are meaningless. Serious teams prefer multi-month or multi-year locks, or they burn LP tokens outright. On DexScreener, sudden supply changes or swaps routed through multiple pairs can reveal stealth tax collection wallets, commonly called “marketing” or “liquidity” in code.
Watch for sneaky liquidity zaps. Contracts with swapBack, addLiquidity, or autoLiquify functions can dump accrued taxes into the pool at bad times. That does not make a token a scam by itself, but if the owner can change the tax inputs without limits, it is a control vector.
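Why swapBack timing matters comes straight from constant-product math: dumping accrued tax tokens into an x * y = k pool moves price by the size of the dump relative to reserves. A sketch using the Uniswap-v2 style output formula with made-up reserve numbers.

```python
def sell_into_pool(token_reserve, eth_reserve, amount_in, fee_bps=30):
    """Uniswap-v2 style swap output for selling amount_in tokens for ETH."""
    amount_with_fee = amount_in * (10_000 - fee_bps)
    return amount_with_fee * eth_reserve // (token_reserve * 10_000 + amount_with_fee)

TOKENS = 1_000_000
ETH = 100 * 10**18                          # 100 ETH of depth, in wei

price_before = ETH / TOKENS
out = sell_into_pool(TOKENS, ETH, 50_000)   # swapBack dumps 5% of token reserves
price_after = (ETH - out) / (TOKENS + 50_000)
drop = 1 - price_after / price_before
print(f"price impact: {drop:.1%}")          # roughly a 9 percent drop
```

A 5-percent-of-reserves dump shaving roughly a tenth off the price is exactly the dip you see on DexScreener when an aggressive swapBack fires mid-rally.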
ERC20 specifics that cause “token cannot sell”
Many “token cannot sell” complaints boil down to a few code paths:
- transfer or _transfer blocks transfers to the pair: checks like if (to == pair) require(allowedToSell[sender]).
- Blacklist per address: require(!isBlacklisted[sender] && !isBlacklisted[recipient]).
- Pausability toggles: an onlyOwner pause() that halts trading while insiders exit.
- Maximum sell or wallet checks: require(amount <= maxTxAmount) or require(balanceOf(recipient) + amount <= maxWallet), often asymmetric so sells fail more than buys.
- Dynamic fees: taxes that spike on the sell path, identified by branches like if (to == pair) tax = sellTax; where sellTax is owner-settable up to 100 percent.
When combined with centralized ownership and unlocked LP, these mechanisms are the scaffolding for a honeypot.
Reading social proof the right way
The market loves vanity metrics. Ignore follower counts. Look for credible accounts interacting with the project, engineers answering hard questions, and technical threads that reference code lines, function names, or specific Etherscan transactions. Community curators on platforms like X frequently flag bad deployers by address, not just by name. If you see the same deployer address across multiple rugs, you already have your verdict.
CoinGecko listings and CEX announcements do not mean “safe.” I have audited tokens that had early listings yet shipped with wide-open mint permissions. Treat exchange presence as distribution, not validation.
When a smart contract audit helps, and when it does not
A real audit is a risk reduction, not a shield. Reputable firms like ConsenSys Diligence, PeckShield, CertiK, and Hacken publish clear findings and remediation status. Read the report. If critical issues were left unresolved, or if the audit covers a different commit than the deployed one, your assumptions are wrong.
Also check scope. If the audit ignored the upgrade proxy or tokenomics contracts that drive fee logic, it may miss the very path that enables a rug.
One underrated signal: see if the team engaged multiple rounds or firms, or if they integrated static analysis (Slither), fuzzing (Foundry’s forge fuzz), and differential tests. Teams that test edge cases like fee updates mid-block or blacklist removal during peak volume usually think beyond the marketing deck.
A cautious way to test live selling, if you must
As a last resort, I sometimes buy tiny, then test a sell immediately with high slippage and realistic gas. Use a burner wallet. If the transaction reverts, read the revert string or event logs on Etherscan. Functions with custom errors will show hints like TradingNotEnabled() or Blacklisted(). If it sells but taxes are wildly higher than advertised, the fee path is lying. I do this rarely, because good desk work usually answers the question before I spend gas.
Pulling it together: a practical flow
If I were starting from zero on a new token:
- Check Etherscan or BscScan for verified code, owner status, proxy presence, and obvious sell-gating logic.
- Inspect DexScreener for organic sells, LP lock, and tax behavior visible in swap deltas.
- Scan X for credible warnings from on-chain sleuths and security firms, not influencer hype.
- Confirm a real GitHub with history, tests, and code that matches the deployed bytecode and compiler settings.
- Treat centralized control over fees, blacklist, and trading toggles as a deal-breaker unless there is strong governance, timelocks, and a track record.
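The flow above reduces to a single gate: any deal-breaker ends the evaluation. A sketch with invented field names; each boolean stands in for one of the manual checks described earlier.

```python
# Each rule pairs a human-readable reason with a predicate over the findings.
DEAL_BREAKERS = [
    ("unverified or partial-match source",
     lambda t: not t["source_exact_match"]),
    ("owner can change fees/blacklist/trading",
     lambda t: t["owner_controls_trading"] and not t["has_timelock"]),
    ("LP unlocked or held by deployer",
     lambda t: not t["lp_locked"]),
    ("no GitHub history matching the bytecode",
     lambda t: not t["github_matches_bytecode"]),
]

def evaluate(token):
    """Return every deal-breaker that fires; an empty list means proceed."""
    return [reason for reason, check in DEAL_BREAKERS if check(token)]

candidate = {"source_exact_match": True, "owner_controls_trading": True,
             "has_timelock": False, "lp_locked": False,
             "github_matches_bytecode": True}
print(evaluate(candidate))
# ['owner can change fees/blacklist/trading', 'LP unlocked or held by deployer']
```

One firing rule is enough for me to walk; the list of reasons is just there so the decision is explainable later.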