From patchwork Mon Aug 12 01:56:41 2024
X-Patchwork-Submitter: Daniel Golle
X-Patchwork-Id: 1971406
Date: Mon, 12 Aug 2024 02:56:41 +0100
From: Daniel Golle
To: Miquel Raynal, Richard Weinberger, Vignesh Raghavendra,
 Tudor Ambarus, Daniel Golle, Mika Westerberg, Chia-Lin Kao,
 Martin Kurbanov, linux-mtd@lists.infradead.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH] mtd: spinand: set bitflip_threshold to 75% of ECC strength
Message-ID: <2117e387260b0a96f95b8e1652ff79e0e2d71d53.1723427450.git.daniel@makrotopia.org>

Reporting an unclean read from SPI-NAND only when the maximum number
of correctable bitflip errors has been hit seems a bit late. UBI LEB
scrubbing, which depends on the lower MTD device reporting correctable
bitflips, then only kicks in when it is almost too late.

Set bitflip_threshold to 75% of the ECC strength, which is also the
default for raw NAND.

Signed-off-by: Daniel Golle
---
 drivers/mtd/nand/spi/core.c | 1 +
 1 file changed, 1 insertion(+)

--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -1286,6 +1286,7 @@ static int spinand_init(struct spinand_d
 	/* Propagate ECC information to mtd_info */
 	mtd->ecc_strength = nanddev_get_ecc_conf(nand)->strength;
 	mtd->ecc_step_size = nanddev_get_ecc_conf(nand)->step_size;
+	mtd->bitflip_threshold = DIV_ROUND_UP(mtd->ecc_strength * 3, 4);
 
 	ret = spinand_create_dirmaps(spinand);
 	if (ret) {
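
For illustration, here is a minimal userspace sketch (not part of the
patch) of the thresholds the formula above yields for some common
on-die ECC strengths. DIV_ROUND_UP is redefined locally to mirror the
kernel macro from include/linux/math.h; the example strength values
are merely illustrative:

#include <stdio.h>

/* Userspace stand-in for the kernel's DIV_ROUND_UP() macro. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	/* Illustrative SPI-NAND on-die ECC strengths, in bits per ECC step. */
	unsigned int strengths[] = { 1, 4, 8 };
	unsigned int i;

	for (i = 0; i < sizeof(strengths) / sizeof(strengths[0]); i++) {
		unsigned int s = strengths[i];

		printf("ecc_strength=%u -> bitflip_threshold=%u\n",
		       s, DIV_ROUND_UP(s * 3, 4));
	}
	return 0;
}

With an 8-bit-per-step ECC, for example, a read is reported as
-EUCLEAN (and thus becomes a scrubbing candidate for UBI) once 6
bitflips are corrected in an ECC step, rather than only at the full 8.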