{"id":385,"date":"2026-02-09T02:26:09","date_gmt":"2026-02-08T18:26:09","guid":{"rendered":"https:\/\/connectword.dpdns.org\/?p=385"},"modified":"2026-02-09T02:26:09","modified_gmt":"2026-02-08T18:26:09","slug":"bytedance-releases-protenix-v1-a-new-open-source-model-achieving-af3-level-performance-in-biomolecular-structure-prediction","status":"publish","type":"post","link":"https:\/\/connectword.dpdns.org\/?p=385","title":{"rendered":"ByteDance Releases Protenix-v1: A New Open-Source Model Achieving AF3-Level Performance in Biomolecular Structure Prediction"},"content":{"rendered":"<p>How close can an open model get to AlphaFold3-level accuracy when it matches training data, model scale and inference budget? ByteDance has introduced <strong>Protenix-v1<\/strong>, a comprehensive <strong>AlphaFold3 (AF3) reproduction<\/strong> for biomolecular structure prediction, released with <strong>code and model parameters under Apache 2.0<\/strong>. The model targets <strong>AF3-level performance<\/strong> across protein, DNA, RNA and ligand structures while keeping the entire stack open and extensible for research and production.<\/p>\n<p>The core release also ships with <strong>PXMeter v1.0.0<\/strong>, an evaluation toolkit and dataset suite for <strong>transparent benchmarking on more than 6k complexes<\/strong> with <strong>time-split and domain-specific subsets<\/strong>.<\/p>\n<h3 class=\"wp-block-heading\"><strong>What is Protenix-v1?<\/strong><\/h3>\n<p>Protenix is described as <strong>\u2018Protenix: Protein + X<\/strong>\u2018, a foundation model for <strong>high-accuracy biomolecular structure prediction<\/strong>. It predicts <strong>all-atom 3D structures for complexes that can include:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li>Proteins<\/li>\n<li>Nucleic acids (DNA and RNA)<\/li>\n<li>Small-molecule ligands<\/li>\n<\/ul>\n<p>The research team defines Protenix as a <strong>comprehensive AF3 reproduction<\/strong>. 
\n<p><strong>The project is released as a full stack:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li>Training and inference code<\/li>\n<li>Pre-trained model weights<\/li>\n<li>Data and MSA pipelines<\/li>\n<li>A browser-based <strong>Protenix Web Server<\/strong> for interactive use<\/li>\n<\/ul>\n<h3 class=\"wp-block-heading\"><strong>AF3-level performance under matched constraints<\/strong><\/h3>\n<p>According to the research team, <strong>Protenix-v1 (protenix_base_default_v1.0.0)<\/strong> is <strong>\u2018the first fully open-source model that outperforms AlphaFold3 across diverse benchmark sets while adhering to the same training data cutoff, model scale, and inference budget as AlphaFold3.\u2019<\/strong><\/p>\n<p><strong>The matched constraints are:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li><strong>Training data cutoff<\/strong>: 2021-09-30, aligned with AF3\u2019s PDB cutoff.<\/li>\n<li><strong>Model scale<\/strong>: Protenix-v1 has <strong>368M parameters<\/strong>; AF3\u2019s exact size is not disclosed, so the comparison targets the same scale class.<\/li>\n<li><strong>Inference budget<\/strong>: comparisons use similar sampling budgets and runtime constraints.<\/li>\n<\/ul>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1730\" height=\"638\" src=\"https:\/\/www.marktechpost.com\/wp-content\/uploads\/2026\/02\/Screenshot-2026-02-08-at-10.08.28-AM-1.png\" alt=\"\" class=\"wp-image-77801\" \/><figcaption class=\"wp-element-caption\">https:\/\/github.com\/bytedance\/Protenix<\/figcaption><\/figure>\n<\/div>\n<p>On challenging targets such as <strong>antigen\u2013antibody complexes<\/strong>, increasing the <strong>number of sampled candidates<\/strong> from several to hundreds yields <strong>consistent log-linear improvements in accuracy<\/strong>. This gives a clear and documented <strong>inference-time scaling behavior<\/strong> rather than a single fixed operating point.<\/p>
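\n<p>A minimal way to picture this best-of-N recipe is the simulation below: draw many candidate structures, rank them with an imperfect confidence score, and keep the top-ranked one. The scoring model and numbers are invented for illustration and are not Protenix results; only the selection logic mirrors the sampling strategy described above.<\/p>\n<pre class=\"wp-block-code\"><code>import numpy as np\n\n# Simulate best-of-N selection: quality improves as more candidates are drawn,\n# even though the ranking score is only loosely correlated with true quality.\nrng = np.random.default_rng(0)\n\ndef predict_candidates(n):\n    # Stand-in for n diffusion samples: each has a hidden quality and a noisy score.\n    quality = rng.beta(2, 5, size=n)                 # hidden accuracy of each sample\n    ranking_score = quality + rng.normal(0, 0.1, n)  # imperfect confidence head\n    return quality, ranking_score\n\nfor n in [5, 25, 125, 625]:\n    picked = []\n    for _ in range(200):                             # average over repeated draws\n        quality, score = predict_candidates(n)\n        picked.append(quality[np.argmax(score)])     # keep the top-ranked candidate\n    print(f'N={n:4d}  mean quality of selected candidate: {np.mean(picked):.3f}')<\/code><\/pre>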
\n<h3 class=\"wp-block-heading\"><strong>PXMeter v1.0.0: Evaluation for 6k+ complexes<\/strong><\/h3>\n<p>To support these claims, the research team released <strong>PXMeter v1.0.0<\/strong>, an open-source toolkit for <strong>reproducible structure prediction benchmarks<\/strong>.<\/p>\n<p><strong>PXMeter provides:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li>A <strong>manually curated benchmark dataset<\/strong>, with non-biological artifacts and problematic entries removed<\/li>\n<li><strong>Time-split and domain-specific subsets<\/strong> (for example, antibody\u2013antigen, protein\u2013RNA, ligand complexes)<\/li>\n<li>A <strong>unified evaluation framework<\/strong> that computes metrics such as complex LDDT and DockQ across models<\/li>\n<\/ul>\n<p>The associated PXMeter research paper, <em>\u2018<a href=\"https:\/\/www.biorxiv.org\/content\/10.1101\/2025.07.17.664878v1\" target=\"_blank\" rel=\"noreferrer noopener\">Revisiting Structure Prediction Benchmarks with PXMeter<\/a>\u2019<\/em>, evaluates <strong>Protenix, AlphaFold3, Boltz-1 and Chai-1<\/strong> on the same curated tasks, and shows how different dataset designs affect model ranking and perceived performance.<\/p>
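\n<p>For intuition about these metrics, the sketch below computes a simplified all-atom version of the distance-based LDDT score. It skips the per-residue bookkeeping, stereochemistry checks, and symmetry handling that a real evaluator such as PXMeter performs, so it is for illustration only:<\/p>\n<pre class=\"wp-block-code\"><code>import numpy as np\n\ndef toy_lddt(pred, ref, inclusion_radius=15.0, thresholds=(0.5, 1.0, 2.0, 4.0)):\n    # pred, ref: (N, 3) coordinates of the same atoms in the same order.\n    d_ref = np.linalg.norm(ref[:, None, :] - ref[None, :, :], axis=-1)\n    d_pred = np.linalg.norm(pred[:, None, :] - pred[None, :, :], axis=-1)\n    # Score only reference pairs that are local contacts (exclude self-pairs).\n    mask = np.logical_and(d_ref != 0.0, inclusion_radius >= d_ref)\n    diff = np.abs(d_ref - d_pred)[mask]\n    # Average, over the four tolerances, the fraction of preserved distances.\n    return float(np.mean([(t >= diff).mean() for t in thresholds]))\n\nref = np.random.rand(50, 3) * 20.0                  # fake 'experimental' coordinates\npred = ref + np.random.normal(0.0, 0.5, ref.shape)  # mildly perturbed 'prediction'\nprint(f'toy LDDT: {toy_lddt(pred, ref):.3f}')<\/code><\/pre>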
\n<h3 class=\"wp-block-heading\"><strong>How Protenix fits into the broader stack<\/strong><\/h3>\n<p><strong>Protenix is part of a small ecosystem of related projects:<\/strong><\/p>\n<ul class=\"wp-block-list\">\n<li><strong>PXDesign<\/strong>: a binder design suite built on the Protenix foundation model. It reports <strong>20\u201373% experimental hit rates<\/strong> and <strong>2\u20136\u00d7 higher success<\/strong> than methods such as AlphaProteo and RFdiffusion, and is accessible via the Protenix Server.<\/li>\n<li><strong>Protenix-Dock<\/strong>: a <strong>classical protein\u2013ligand docking framework<\/strong> that uses empirical scoring functions rather than deep networks, tuned for rigid docking tasks.<\/li>\n<li><strong>Protenix-Mini<\/strong> and follow-on work such as <strong>Protenix-Mini+<\/strong>: lightweight variants that reduce inference cost using architectural compression and few-step diffusion samplers, while keeping accuracy within a few percent of the full model on standard benchmarks.<\/li>\n<\/ul>\n<p>Together, these components cover structure prediction, docking, and design, and share interfaces and formats, which simplifies integration into downstream pipelines.<\/p>\n<h3 class=\"wp-block-heading\"><strong>Key Takeaways<\/strong><\/h3>\n<ul class=\"wp-block-list\">\n<li><strong>AF3-class, fully open model<\/strong>: Protenix-v1 is an AF3-style all-atom biomolecular structure predictor with open code and weights under Apache 2.0, targeting proteins, DNA, RNA and ligands.<\/li>\n<li><strong>Strict AF3 alignment for fair comparison<\/strong>: Protenix-v1 matches AlphaFold3 on the critical axes of training data cutoff (2021-09-30), model scale class, and comparable inference budget, enabling fair AF3-level performance claims.<\/li>\n<li><strong>Transparent benchmarking with PXMeter v1.0.0<\/strong>: PXMeter provides a curated benchmark suite over 6k+ complexes with time-split and domain-specific subsets plus unified metrics (for example, complex LDDT, DockQ) for reproducible evaluation.<\/li>\n<li><strong>Verified inference-time scaling behavior<\/strong>: Protenix-v1 shows log-linear accuracy gains as the number of sampled candidates increases, giving a documented latency\u2013accuracy trade-off rather than a single fixed operating point.<\/li>\n<\/ul>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n<p>Check out the <strong><a href=\"https:\/\/github.com\/bytedance\/Protenix\" target=\"_blank\" rel=\"noreferrer noopener\">GitHub repo<\/a><\/strong> and try the <strong><a href=\"https:\/\/protenix-server.com\/login\" target=\"_blank\" rel=\"noreferrer noopener\">Protenix Web Server<\/a><\/strong>.<\/p>","protected":false},"excerpt":{"rendered":"<p>How close can an open model ge&hellip;<\/p>\n","protected":false},"author":1,"featured_media":386,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-385","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/385","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=385"}],"version-history":[{"count":0,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/posts\/385\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=\/wp\/v2\/media\/386"}],"wp:attachment":[{"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=385"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=385"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/connectword.dpdns.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=385"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}