Taiji Suzuki
Person information
- affiliation: Tokyo Institute of Technology, Department of Mathematical and Computing Sciences
2020 – today
- 2024
- [c105] Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu: Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations. COLT 2024: 4009-4081
- [c104] Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki: Koopman-based generalization bound: New aspect for full-rank weights. ICLR 2024
- [c103] Wei Huang, Ye Shi, Zhongyi Cai, Taiji Suzuki: Understanding Convergence and Generalization in Federated Learning through Feature Learning Theory. ICLR 2024
- [c102] Juno Kim, Kakei Yamamoto, Kazusato Oko, Zhuoran Yang, Taiji Suzuki: Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems. ICLR 2024
- [c101] Yuto Nishimura, Taiji Suzuki: Minimax optimality of convolutional neural networks for infinite dimensional input-output problems and separation from kernel methods. ICLR 2024
- [c100] Atsushi Nitanda, Kazusato Oko, Taiji Suzuki, Denny Wu: Improved statistical and computational complexity of the mean-field Langevin dynamics under structured data. ICLR 2024
- [c99] Keita Suzuki, Taiji Suzuki: Optimal criterion for feature learning of two-layer linear neural network in high dimensional interpolation regime. ICLR 2024
- [c98] Dake Bu, Wei Huang, Taiji Suzuki, Ji Cheng, Qingfu Zhang, Zhiqiang Xu, Hau-San Wong: Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples. ICML 2024
- [c97] Yihang Chen, Fanghui Liu, Taiji Suzuki, Volkan Cevher: High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization. ICML 2024
- [c96] Juno Kim, Taiji Suzuki: Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape. ICML 2024
- [c95] Kazusato Oko, Shunta Akiyama, Denny Wu, Tomoya Murata, Taiji Suzuki: SILVER: Single-loop variance reduction and application to federated learning. ICML 2024
- [c94] Rom N. Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T. H. Smith, Ramin M. Hasani, Mathias Lechner, Qi An, Christopher Ré, Hajime Asama, Stefano Ermon, Taiji Suzuki, Michael Poli, Atsushi Yamashita: State-Free Inference of State-Space Models: The Transfer Function Approach. ICML 2024
- [c93] Michael Poli, Armin W. Thomas, Eric Nguyen, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Ré, Ce Zhang, Stefano Massaroli: Mechanistic Design and Scaling of Hybrid Architectures. ICML 2024
- [c92] Michael Eli Sander, Raja Giryes, Taiji Suzuki, Mathieu Blondel, Gabriel Peyré: How do Transformers Perform In-Context Autoregressive Learning? ICML 2024
- [c91] Shokichi Takakura, Taiji Suzuki: Mean-field Analysis on Two-layer Neural Networks from a Kernel Perspective. ICML 2024
- [c90] Kakei Yamamoto, Kazusato Oko, Zhuoran Yang, Taiji Suzuki: Mean Field Langevin Actor-Critic: Faster Convergence and Global Optimality beyond Lazy Learning. ICML 2024
- [c89] Kazuki Uematsu, Kosuke Haruki, Taiji Suzuki, Mitsuhiro Kimura, Takahiro Takimoto, Hideyuki Nakagawa: Dimensionality-Induced Information Loss of Outliers in Deep Neural Networks. ECML/PKDD (1) 2024: 144-160
- [i85] Juno Kim, Taiji Suzuki: Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape. CoRR abs/2402.01258 (2024)
- [i84] Michael E. Sander, Raja Giryes, Taiji Suzuki, Mathieu Blondel, Gabriel Peyré: How do Transformers perform In-Context Autoregressive Learning? CoRR abs/2402.05787 (2024)
- [i83] Shokichi Takakura, Taiji Suzuki: Mean-field Analysis on Two-layer Neural Networks from a Kernel Perspective. CoRR abs/2403.14917 (2024)
- [i82] Michael Poli, Armin W. Thomas, Eric Nguyen, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Ré, Ce Zhang, Stefano Massaroli: Mechanistic Design and Scaling of Hybrid Architectures. CoRR abs/2403.17844 (2024)
- [i81] Toshimitsu Uesaka, Taiji Suzuki, Yuhta Takida, Chieh-Hsin Lai, Naoki Murata, Yuki Mitsufuji: Understanding Multimodal Contrastive Learning Through Pointwise Mutual Information. CoRR abs/2404.19228 (2024)
- [i80] Rom N. Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T. H. Smith, Ramin M. Hasani, Mathias Lechner, Qi An, Christopher Ré, Hajime Asama, Stefano Ermon, Taiji Suzuki, Atsushi Yamashita, Michael Poli: State-Free Inference of State-Space Models: The Transfer Function Approach. CoRR abs/2405.06147 (2024)
- [i79] Naoki Nishikawa, Taiji Suzuki: State Space Models are Comparable to Transformers in Estimating Functions with Dynamic Smoothness. CoRR abs/2405.19036 (2024)
- [i78] Kenji Fukumizu, Taiji Suzuki, Noboru Isobe, Kazusato Oko, Masanori Koyama: Flow matching achieves minimax optimal convergence. CoRR abs/2405.20879 (2024)
- [i77] Jason D. Lee, Kazusato Oko, Taiji Suzuki, Denny Wu: Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit. CoRR abs/2406.01581 (2024)
- [i76] Yihang Chen, Fanghui Liu, Taiji Suzuki, Volkan Cevher: High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization. CoRR abs/2406.03171 (2024)
- [i75] Dake Bu, Wei Huang, Taiji Suzuki, Ji Cheng, Qingfu Zhang, Zhiqiang Xu, Hau-San Wong: Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples. CoRR abs/2406.03944 (2024)
- [i74] Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu: Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations. CoRR abs/2406.11828 (2024)
- [i73] Juno Kim, Tai Nakamaki, Taiji Suzuki: Transformers are Minimax Optimal Nonparametric In-Context Learners. CoRR abs/2408.12186 (2024)
- [i72] Jiarui Jiang, Wei Huang, Miao Zhang, Taiji Suzuki, Liqiang Nie: Unveil Benign Overfitting for Transformer in Vision: Training Dynamics, Convergence, and Generalization. CoRR abs/2409.19345 (2024)
- 2023
- [c88] Shunta Akiyama, Taiji Suzuki: Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods. ICLR 2023
- [c87] Taiji Suzuki, Atsushi Nitanda, Denny Wu: Uniform-in-time propagation of chaos for the mean-field gradient Langevin dynamics. ICLR 2023
- [c86] Tomoya Murata, Taiji Suzuki: DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning. ICML 2023: 25523-25548
- [c85] Atsushi Nitanda, Kazusato Oko, Denny Wu, Nobuhito Takenouchi, Taiji Suzuki: Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems. ICML 2023: 26266-26282
- [c84] Kazusato Oko, Shunta Akiyama, Taiji Suzuki: Diffusion Models are Minimax Optimal Distribution Estimators. ICML 2023: 26517-26582
- [c83] Atsushi Suzuki, Atsushi Nitanda, Taiji Suzuki, Jing Wang, Feng Tian, Kenji Yamanishi: Tight and fast generalization error bound of graph embedding in metric space. ICML 2023: 33268-33284
- [c82] Shokichi Takakura, Taiji Suzuki: Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input. ICML 2023: 33416-33447
- [c81] Shuhei Nitta, Taiji Suzuki, Albert Rodríguez Mulet, Atsushi Yaguchi, Ryusuke Hirai: Scalable Federated Learning for Clients with Different Input Image Sizes and Numbers of Output Categories. ICMLA 2023: 764-769
- [c80] Hiroaki Kingetsu, Kenichi Kobayashi, Taiji Suzuki: Neural Network Module Decomposition and Recomposition with Superimposed Masks. IJCNN 2023: 1-10
- [c79] Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu: Learning in the Presence of Low-dimensional Structure: A Spiked Random Matrix Perspective. NeurIPS 2023
- [c78] Alireza Mousavi Hosseini, Denny Wu, Taiji Suzuki, Murat A. Erdogdu: Gradient-Based Feature Learning under Structured Data. NeurIPS 2023
- [c77] Taiji Suzuki, Denny Wu, Atsushi Nitanda: Mean-field Langevin dynamics: Time-space discretization, stochastic gradient, and variance reduction. NeurIPS 2023
- [c76] Taiji Suzuki, Denny Wu, Kazusato Oko, Atsushi Nitanda: Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond. NeurIPS 2023
- [i71] Tomoya Murata, Taiji Suzuki: DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning. CoRR abs/2302.03884 (2023)
- [i70] Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki: Koopman-Based Bound for Generalization: New Aspect of Neural Networks Regarding Nonlinear Noise Filtering. CoRR abs/2302.05825 (2023)
- [i69] Kazusato Oko, Shunta Akiyama, Taiji Suzuki: Diffusion Models are Minimax Optimal Distribution Estimators. CoRR abs/2303.01861 (2023)
- [i68] Atsushi Nitanda, Kazusato Oko, Denny Wu, Nobuhito Takenouchi, Taiji Suzuki: Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems. CoRR abs/2303.02957 (2023)
- [i67] Atsushi Suzuki, Atsushi Nitanda, Taiji Suzuki, Jing Wang, Feng Tian, Kenji Yamanishi: Tight and fast generalization error bound of graph embedding in metric space. CoRR abs/2305.07971 (2023)
- [i66] Shokichi Takakura, Taiji Suzuki: Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input. CoRR abs/2305.18699 (2023)
- [i65] Taiji Suzuki, Denny Wu, Atsushi Nitanda: Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction. CoRR abs/2306.07221 (2023)
- [i64] Wei Huang, Yuan Cao, Haonan Wang, Xin Cao, Taiji Suzuki: Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective. CoRR abs/2306.13926 (2023)
- [i63] Kishan Wimalawarne, Taiji Suzuki, Sophie Langer: Learning Green's Function Efficiently Using Low-Rank Approximations. CoRR abs/2308.00350 (2023)
- [i62] Alireza Mousavi Hosseini, Denny Wu, Taiji Suzuki, Murat A. Erdogdu: Gradient-Based Feature Learning under Structured Data. CoRR abs/2309.03843 (2023)
- [i61] Shuhei Nitta, Taiji Suzuki, Albert Rodríguez Mulet, Atsushi Yaguchi, Ryusuke Hirai: Scalable Federated Learning for Clients with Different Input Image Sizes and Numbers of Output Categories. CoRR abs/2311.08716 (2023)
- 2022
- [j29] Chihiro Watanabe, Taiji Suzuki: Deep two-way matrix reordering for relational data analysis. Neural Networks 146: 303-315 (2022)
- [c75] Kishan Wimalawarne, Taiji Suzuki: Layer-wise Adaptive Graph Convolution Networks Using Generalized Pagerank. ACML 2022: 1117-1132
- [c74] Atsushi Nitanda, Denny Wu, Taiji Suzuki: Convex Analysis of the Mean Field Langevin Dynamics. AISTATS 2022: 9741-9757
- [c73] Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki: Dimension-free convergence rates for gradient Langevin dynamics in RKHS. COLT 2022: 1356-1420
- [c72] Jimmy Ba, Murat A. Erdogdu, Marzyeh Ghassemi, Shengyang Sun, Taiji Suzuki, Denny Wu, Tianzong Zhang: Understanding the Variance Collapse of SVGD in High Dimensions. ICLR 2022
- [c71] Kazusato Oko, Taiji Suzuki, Atsushi Nitanda, Denny Wu: Particle Stochastic Dual Coordinate Ascent: Exponential convergent algorithm for mean field neural network optimization. ICLR 2022
- [c70] Sho Okumoto, Taiji Suzuki: Learnability of convolutional neural networks for infinite dimensional input via mixed and anisotropic smoothness. ICLR 2022
- [c69] Chenyuan Xu, Kosuke Haruki, Taiji Suzuki, Masahiro Ozawa, Kazuki Uematsu, Ryuji Sakai: Data-Parallel Momentum Diagonal Empirical Fisher (DP-MDEF): Adaptive Gradient Method is Affected by Hessian Approximation and Multi-Class Data. ICMLA 2022: 1397-1404
- [c68] Kengo Machida, Kuniaki Uto, Koichi Shinoda, Taiji Suzuki: MSR-DARTS: Minimum Stable Rank of Differentiable Architecture Search. IJCNN 2022: 1-9
- [c67] Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang: High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation. NeurIPS 2022
- [c66] Yuri Kinoshita, Taiji Suzuki: Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization. NeurIPS 2022
- [c65] Tomoya Murata, Taiji Suzuki: Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning. NeurIPS 2022
- [c64] Naoki Nishikawa, Taiji Suzuki, Atsushi Nitanda, Denny Wu: Two-layer neural network on infinite dimensional data: global optimization guarantee in the mean-field regime. NeurIPS 2022
- [c63] Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, Shin-ichi Maeda, Kohei Hayashi: A Scaling Law for Syn2real Transfer: How Much Is Your Pre-training Effective? ECML/PKDD (3) 2022: 477-492
- [i60] Atsushi Nitanda, Denny Wu, Taiji Suzuki: Convex Analysis of the Mean Field Langevin Dynamics. CoRR abs/2201.10469 (2022)
- [i59] Tomoya Murata, Taiji Suzuki: Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning. CoRR abs/2202.06083 (2022)
- [i58] Yuri Kinoshita, Taiji Suzuki: Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization. CoRR abs/2203.16217 (2022)
- [i57] Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang: High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation. CoRR abs/2205.01445 (2022)
- [i56] Shunta Akiyama, Taiji Suzuki: Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods. CoRR abs/2205.14818 (2022)
- [i55] Kazusato Oko, Shunta Akiyama, Tomoya Murata, Taiji Suzuki: Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning. CoRR abs/2209.00361 (2022)
- [i54] Kishan Wimalawarne, Taiji Suzuki: Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs. CoRR abs/2209.05020 (2022)
- 2021
- [j28] Chihiro Watanabe, Taiji Suzuki: Goodness-of-fit test for latent block models. Comput. Stat. Data Anal. 154: 107090 (2021)
- [j27] Atsushi Nitanda, Tomoya Murata, Taiji Suzuki: Sharp characterization of optimal minibatch size for stochastic finite sum convex optimization. Knowl. Inf. Syst. 63(9): 2513-2539 (2021)
- [c62] Shingo Yashima, Atsushi Nitanda, Taiji Suzuki: Exponential Convergence Rates of Classification Errors on Learning with SGD and Random Features. AISTATS 2021: 1954-1962
- [c61] Tomoya Murata, Taiji Suzuki: Gradient Descent in RKHS with Importance Labeling. AISTATS 2021: 1981-1989
- [c60] Shun-ichi Amari, Jimmy Ba, Roger Baker Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu: When does preconditioning help or hurt generalization? ICLR 2021
- [c59] Atsushi Nitanda, Taiji Suzuki: Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime. ICLR 2021
- [c58] Taiji Suzuki, Shunta Akiyama: Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods. ICLR 2021
- [c57] Shunta Akiyama, Taiji Suzuki: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting. ICML 2021: 152-162
- [c56] Tomoya Murata, Taiji Suzuki: Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning. ICML 2021: 7872-7881
- [c55] Akira Nakagawa, Keizo Kato, Taiji Suzuki: Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding. ICML 2021: 7916-7926
- [c54] Atsushi Yaguchi, Taiji Suzuki, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa: Decomposable-Net: Scalable Low-Rank Compression for Neural Networks. IJCAI 2021: 3249-3256
- [c53] Taiji Suzuki, Atsushi Nitanda: Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space. NeurIPS 2021: 3609-3621
- [c52] Stefano Massaroli, Michael Poli, Sho Sonoda, Taiji Suzuki, Jinkyoo Park, Atsushi Yamashita, Hajime Asama: Differentiable Multiple Shooting Layers. NeurIPS 2021: 16532-16544
- [c51] Atsushi Nitanda, Denny Wu, Taiji Suzuki: Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis. NeurIPS 2021: 19608-19621
- [c50] Chihiro Watanabe, Taiji Suzuki: AutoLL: Automatic Linear Layout of Graphs based on Deep Neural Network. SSCI 2021: 1-10
- [i53] Tomoya Murata, Taiji Suzuki: Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning. CoRR abs/2102.03198 (2021)
- [i52] Chihiro Watanabe, Taiji Suzuki: Goodness-of-fit Test on the Number of Biclusters in Relational Data Matrix. CoRR abs/2102.11658 (2021)
- [i51] Chihiro Watanabe, Taiji Suzuki: Deep Two-way Matrix Reordering for Relational Data Analysis. CoRR abs/2103.14203 (2021)
- [i50] Stefano Massaroli, Michael Poli, Sho Sonoda, Taiji Suzuki, Jinkyoo Park, Atsushi Yamashita, Hajime Asama: Differentiable Multiple Shooting Layers. CoRR abs/2106.03885 (2021)
- [i49] Shunta Akiyama, Taiji Suzuki: On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting. CoRR abs/2106.06251 (2021)
- [i48] Chihiro Watanabe, Taiji Suzuki: AutoLL: Automatic Linear Layout of Graphs based on Deep Neural Network. CoRR abs/2108.02431 (2021)
- [i47] Kishan Wimalawarne, Taiji Suzuki: Adaptive and Interpretable Graph Convolution Networks Using Generalized Pagerank. CoRR abs/2108.10636 (2021)
- [i46] Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, Shin-ichi Maeda, Kohei Hayashi: A Scaling Law for Synthetic-to-Real Transfer: A Measure of Pre-Training. CoRR abs/2108.11018 (2021)
- [i45] Hiroaki Kingetsu, Kenichi Kobayashi, Taiji Suzuki: Neural Network Module Decomposition and Recomposition. CoRR abs/2112.13208 (2021)
- 2020
- [j26] Shaogao Lv, Zengyan Fan, Heng Lian, Taiji Suzuki, Kenji Fukumizu: A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model. Comput. Stat. Data Anal. 152: 107039 (2020)
- [j25] Masaaki Takada, Taiji Suzuki, Hironori Fujisawa: Independently Interpretable Lasso for Generalized Linear Models. Neural Comput. 32(6): 1168-1221 (2020)
- [j24] Satoshi Hayakawa, Taiji Suzuki: On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces. Neural Networks 123: 343-361 (2020)
- [c49] Jingling Li, Yanchao Sun, Jiahao Su, Taiji Suzuki, Furong Huang: Understanding Generalization in Deep Learning via Tensor Methods. AISTATS 2020: 504-515
- [c48] Atsushi Nitanda, Taiji Suzuki: Functional Gradient Boosting for Learning Residual-like Networks with Statistical Guarantees. AISTATS 2020: 2981-2991
- [c47] Laurent Dillard, Yosuke Shinya, Taiji Suzuki: Domain Adaptation Regularization for Spectral Pruning. BMVC 2020
- [c46] Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Denny Wu, Tianzong Zhang: Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint. ICLR 2020
- [c45] Kenta Oono, Taiji Suzuki: Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. ICLR 2020
- [c44] Taiji Suzuki, Hiroshi Abe, Tomoaki Nishimura: Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network. ICLR 2020
- [c43] Taiji Suzuki, Hiroshi Abe, Tomoya Murata, Shingo Horiuchi, Kotaro Ito, Tokuma Wachi, So Hirai, Masatoshi Yukishima, Tomoaki Nishimura: Spectral Pruning: Compressing Deep Neural Networks via Spectral Analysis and its Generalization Error. IJCAI 2020: 2839-2846
- [c42] Kenta Oono, Taiji Suzuki: Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks. NeurIPS 2020
- [c41] Taiji Suzuki: Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics. NeurIPS 2020
- [i44] Jingling Li, Yanchao Sun, Jiahao Su, Taiji Suzuki, Furong Huang: Understanding Generalization in Deep Learning via Tensor Methods. CoRR abs/2001.05070 (2020)
- [i43] Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki: Dimension-free convergence rates for gradient Langevin dynamics in RKHS. CoRR abs/2003.00306 (2020)
- [i42] Yusuke Hayashi, Taiji Suzuki: Meta Cyclical Annealing Schedule: A Simple Approach to Avoiding Meta-Amortization Error. CoRR abs/2003.01889 (2020)
- [i41] Chihiro Watanabe, Taiji Suzuki: Selective Inference for Latent Block Models. CoRR abs/2005.13273 (2020)
- [i40] Kenta Oono, Taiji Suzuki: Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks. CoRR abs/2006.08550 (2020)
- [i39] Shun-ichi Amari, Jimmy Ba, Roger B. Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu: When Does Preconditioning Help or Hurt Generalization? CoRR abs/2006.10732 (2020)
- [i38] Tomoya Murata, Taiji Suzuki: Gradient Descent in RKHS with Importance Labeling. CoRR abs/2006.10925 (2020)
- [i37] Atsushi Nitanda, Taiji Suzuki: Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime. CoRR abs/2006.12297 (2020)
- [i36] Taiji Suzuki: Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics. CoRR abs/2007.05824 (2020)
- [i35] Kengo Machida, Kuniaki Uto, Koichi Shinoda, Taiji Suzuki: Neural Architecture Search Using Stable Rank of Convolutional Layers. CoRR abs/2009.09209 (2020)
- [i34] Kazuma Tsuji, Taiji Suzuki: Estimation error analysis of deep learning on the regression problem on the variable exponent Besov space. CoRR abs/2009.11285 (2020)
- [i33] Taiji Suzuki, Shunta Akiyama: Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods. CoRR abs/2012.03224 (2020)
- [i32] Atsushi Nitanda, Denny Wu, Taiji Suzuki: Particle Dual Averaging: Optimization of Mean Field Neural Networks with Global Convergence Rate Analysis. CoRR abs/2012.15477 (2020)
2010 – 2019
- 2019
- [c40] Atsushi Nitanda, Taiji Suzuki: Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors. AISTATS 2019: 1417-1426
- [c39] Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami, Taiji Suzuki: Cross-Domain Recommendation via Deep Domain Adaptation. ECIR (2) 2019: 20-29
- [c38] Yosuke Shinya, Edgar Simo-Serra, Taiji Suzuki: Understanding the Effects of Pre-Training for Object Detectors via Eigenspectrum. ICCV Workshops 2019: 1931-1941
- [c37] Atsushi Nitanda, Tomoya Murata, Taiji Suzuki: Sharp Characterization of Optimal Minibatch Size for Stochastic Finite Sum Convex Optimization. ICDM 2019: 488-497
- [c36] Taiji Suzuki: Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. ICLR (Poster) 2019
- [c35] Kenta Oono, Taiji Suzuki: Approximation and non-parametric estimation of ResNet-type convolutional neural networks. ICML 2019: 4922-4931
- [e1] Wee Sun Lee, Taiji Suzuki: Proceedings of The 11th Asian Conference on Machine Learning, ACML 2019, 17-19 November 2019, Nagoya, Japan. Proceedings of Machine Learning Research 101, PMLR 2019
- [i31] Kenta Oono, Taiji Suzuki: Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks. CoRR abs/1903.10047 (2019)
- [i30] Satoshi Hayakawa, Taiji Suzuki: On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces. CoRR abs/1905.09195 (2019)
- [i29] Atsushi Nitanda, Taiji Suzuki: Refined Generalization Analysis of Gradient Descent for Over-parameterized Two-layer Neural Networks with Smooth Activations on Classification Problems. CoRR abs/1905.09870 (2019)
- [i28] Kenta Oono, Taiji Suzuki: On Asymptotic Behaviors of Graph CNNs from Dynamical Systems Perspective. CoRR abs/1905.10947 (2019)
- [i27] Tomoya Murata, Taiji Suzuki: Accelerated Sparsified SGD with Error Feedback. CoRR abs/1905.12224 (2019)
- [i26] Chihiro Watanabe, Taiji Suzuki: Goodness-of-fit Test for Latent Block Models. CoRR abs/1906.03886 (2019)
- [i25] Kosuke Haruki, Taiji Suzuki, Yohei Hamakawa, Takeshi Toda, Ryuji Sakai, Masahiro Ozawa, Mitsuhiro Kimura: Gradient Noise Convolution (GNC): Smoothing Loss Function for Distributed Large-Batch SGD. CoRR abs/1906.10822 (2019)
- [i24] Yosuke Shinya, Edgar Simo-Serra, Taiji Suzuki: Understanding the Effects of Pre-Training for Object Detectors via Eigenspectrum. CoRR abs/1909.04021 (2019)
- [i23] Taiji Suzuki: Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network. CoRR abs/1909.11274 (2019)
- [i22] Taiji Suzuki, Atsushi Nitanda: Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space. CoRR abs/1910.12799 (2019)
- [i21] Atsushi Yaguchi, Taiji Suzuki, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa: Scalable Deep Neural Networks via Low-Rank Matrix Factorization. CoRR abs/1910.13141 (2019)
- [i20] Shingo Yashima, Atsushi Nitanda, Taiji Suzuki: Exponential Convergence Rates of Classification Errors on Learning with SGD and Random Features. CoRR abs/1911.05350 (2019)
- [i19] Laurent Dillard, Yosuke Shinya, Taiji Suzuki: Domain Adaptation Regularization for Spectral Pruning. CoRR abs/1912.11853 (2019)
- 2018
- [j23] Yuichi Mori, Taiji Suzuki: Generalized ridge estimator and model selection criteria in multivariate linear regression. J. Multivar. Anal. 165: 243-261 (2018)
- [c34] Masaaki Takada, Taiji Suzuki, Hironori Fujisawa: Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables. AISTATS 2018: 454-463
- [c33] Atsushi Nitanda, Taiji Suzuki: Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models. AISTATS 2018: 1008-1016
- [c32] Taiji Suzuki: Fast generalization error bound of deep learning from a kernel perspective. AISTATS 2018: 1397-1406
- [c31] Kazuo Yonekura, Hitoshi Hattori, Taiji Suzuki: Short-term local weather forecast using dense weather station by deep neural network. IEEE BigData 2018: 1683-1690
- [c30] Atsushi Nitanda, Taiji Suzuki: Functional Gradient Boosting based on Residual Network Perception. ICML 2018: 3816-3825
- [c29] Atsushi Yaguchi, Taiji Suzuki, Wataru Asano, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa: Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks. ICMLA 2018: 318-325
- [c28] Tomoya Murata, Taiji Suzuki: Sample Efficient Stochastic Gradient Iterative Hard Thresholding Method for Stochastic Sparse Linear Regression with Limited Attribute Observation. NeurIPS 2018: 5317-5326
- [i18] Atsushi Nitanda, Taiji Suzuki: Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models. CoRR abs/1801.02227 (2018)
- [i17] Atsushi Nitanda, Taiji Suzuki: Functional Gradient Boosting based on Residual Network Perception. CoRR abs/1802.09031 (2018)
- [i16] Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami, Taiji Suzuki: Cross-domain Recommendation via Deep Domain Adaptation. CoRR abs/1803.03018 (2018)
- [i15] Atsushi Nitanda, Taiji Suzuki: Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors. CoRR abs/1806.05438 (2018)
- [i14] Taiji Suzuki, Hiroshi Abe, Tomoya Murata, Shingo Horiuchi, Kotaro Ito, Tokuma Wachi, So Hirai, Masatoshi Yukishima, Tomoaki Nishimura: Spectral-Pruning: Compressing deep neural network via spectral analysis. CoRR abs/1808.08558 (2018)
- [i13] Tomoya Murata, Taiji Suzuki: Sample Efficient Stochastic Gradient Iterative Hard Thresholding Method for Stochastic Sparse Linear Regression with Limited Attribute Observation. CoRR abs/1809.01765 (2018)
- [i12] Taiji Suzuki: Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality. CoRR abs/1810.08033 (2018)
- [i11] Atsushi Yaguchi, Taiji Suzuki, Wataru Asano, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa: Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks. CoRR abs/1812.08119 (2018)
- 2017
- [c27] Atsushi Nitanda, Taiji Suzuki: Stochastic Difference of Convex Algorithm and its Application to Training Deep Boltzmann Machines. AISTATS 2017: 470-478
- [c26] Tomoya Murata, Taiji Suzuki: Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization. NIPS 2017: 608-617
- [c25] Song Liu, Akiko Takeda, Taiji Suzuki, Kenji Fukumizu: Trimmed Density Ratio Estimation. NIPS 2017: 4518-4528
- [i10] Tomoya Murata, Taiji Suzuki: Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization. CoRR abs/1703.00439 (2017)
- [i9] Taiji Suzuki: Fast learning rate of deep learning via a kernel perspective. CoRR abs/1705.10182 (2017)
- [i8] Atsushi Nitanda, Taiji Suzuki: Stochastic Particle Gradient Descent for Infinite Ensembles. CoRR abs/1712.05438 (2017)
- 2016
- [j22] Yoshito Hirata, Kai Morino, Taiji Suzuki, Qian Guo, Hiroshi Fukuhara, Kazuyuki Aihara: System identification and parameter estimation in mathematical medicine: examples demonstrated for prostate cancer. Quant. Biol. 4(1): 13-19 (2016)
- [c24] Song Liu, Taiji Suzuki, Masashi Sugiyama, Kenji Fukumizu: Structure Learning of Partitioned Markov Networks. ICML 2016: 439-448
- [c23] Heishiro Kanagawa, Taiji Suzuki, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami: Gaussian process nonparametric tensor estimator and its minimax optimality. ICML 2016: 1632-1641
- [c22] Taiji Suzuki, Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami: Minimax Optimal Alternating Minimization for Kernel Nonparametric Tensor Learning. NIPS 2016: 3783-3791
- [i7] Tomoya Murata, Taiji Suzuki: Stochastic dual averaging methods using variance reduction techniques for regularized empirical risk minimization problems. CoRR abs/1603.02412 (2016)
- 2015
- [c21] Song Liu, Taiji Suzuki, Masashi Sugiyama: Support Consistency of Direct Sparse-Change Learning in Markov Networks. AAAI 2015: 2785-2791
- [c20] Satoshi Hara, Tetsuro Morimura, Toshihiro Takahashi, Hiroki Yanagisawa, Taiji Suzuki: A Consistent Method for Graph Based Anomaly Localization. AISTATS 2015
- [c19] Taiji Suzuki: Convergence rate of Bayesian tensor estimator and its minimax optimality. ICML 2015: 1273-1282
- 2014
- [j21] Song Liu, John A. Quinn, Michael U. Gutmann, Taiji Suzuki, Masashi Sugiyama: Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation. Neural Comput. 26(6): 1169-1197 (2014)
- [c18] Taiji Suzuki: Stochastic Dual Coordinate Ascent with Alternating Direction Method of Multipliers. ICML 2014: 736-744
- [i6] Taiji Suzuki: Convergence rate of Bayesian tensor estimator: Optimal rate without restricted strong convexity. CoRR abs/1408.3092 (2014)
- 2013
- [j20] Masashi Sugiyama, Song Liu, Marthinus Christoffel du Plessis, Masao Yamanaka, Makoto Yamada, Taiji Suzuki, Takafumi Kanamori: Direct Divergence Approximation between Probability Distributions and Its Applications in Machine Learning. J. Comput. Sci. Eng. 7(2): 99-111 (2013)
- [j19] Takafumi Kanamori, Akiko Takeda, Taiji Suzuki: Conjugate relation between loss functions and uncertainty sets in classification problems. J. Mach. Learn. Res. 14(1): 1461-1504 (2013)
- [j18] Taiji Suzuki: Improvement of multiple kernel learning using adaptively weighted regularization. JSIAM Lett. 5: 49-52 (2013)
- [j17] Takafumi Kanamori, Taiji Suzuki, Masashi Sugiyama: Computational complexity of kernel-based density-ratio estimation: a condition number analysis. Mach. Learn. 90(3): 431-460 (2013)
- [j16] Taiji Suzuki, Masashi Sugiyama: Sufficient Dimension Reduction via Squared-Loss Mutual Information Estimation. Neural Comput. 25(3): 725-758 (2013)
- [j15] Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Masashi Sugiyama: Relative Density-Ratio Estimation for Robust Distribution Comparison. Neural Comput. 25(5): 1324-1370 (2013)
- [j14] Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus Christoffel du Plessis, Song Liu, Ichiro Takeuchi: Density-Difference Estimation. Neural Comput. 25(10): 2734-2775 (2013)
- [c17] Taiji Suzuki: Dual Averaging and Proximal Gradient Descent for Online Alternating Direction Multiplier Method. ICML (1) 2013: 392-400
- [c16] Ryota Tomioka, Taiji Suzuki: Convex Tensor Decomposition via Structured Schatten Norm Regularization. NIPS 2013: 1331-1339
- [i5] Ryota Tomioka, Taiji Suzuki: Convex Tensor Decomposition via Structured Schatten Norm Regularization. CoRR abs/1303.6370 (2013)
- 2012
- [b1] Masashi Sugiyama, Taiji Suzuki, Takafumi Kanamori: Density Ratio Estimation in Machine Learning. Cambridge University Press 2012, ISBN 978-0-521-19017-6, pp. I-XII, 1-329
- [j13] Takafumi Kanamori, Taiji Suzuki, Masashi Sugiyama: Statistical analysis of kernel-based least-squares density-ratio estimation. Mach. Learn. 86(3): 335-367 (2012)
- [j12] Takafumi Kanamori, Taiji Suzuki, Masashi Sugiyama: f-Divergence Estimation and Two-Sample Homogeneity Test Under Semiparametric Density-Ratio Models. IEEE Trans. Inf. Theory 58(2): 708-720 (2012)
- [c15] Taiji Suzuki: PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model. COLT 2012: 8.1-8.20
- [c14] Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus Christoffel du Plessis, Song Liu, Ichiro Takeuchi: Density-Difference Estimation. NIPS 2012: 692-700
- [c13] Takafumi Kanamori, Akiko Takeda, Taiji Suzuki: A Conjugate Property between Loss Functions and Uncertainty Sets in Classification Problems. COLT 2012: 29.1-29.23
- [c12] Taiji Suzuki, Masashi Sugiyama: Fast Learning Rate of Multiple Kernel Learning: Trade-Off between Sparsity and Smoothness. AISTATS 2012: 1152-1183
- [i4] Takafumi Kanamori, Akiko Takeda, Taiji Suzuki: A Conjugate Property between Loss Functions and Uncertainty Sets in Classification Problems. CoRR abs/1204.6583 (2012)
- [i3] Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus Christoffel du Plessis, Song Liu, Ichiro Takeuchi: Density-Difference Estimation. CoRR abs/1207.0099 (2012)
- 2011
- [j11] Masashi Sugiyama, Taiji Suzuki: Least-Squares Independence Test. IEICE Trans. Inf. Syst. 94-D(6): 1333-1336 (2011)
- [j10] Ryota Tomioka, Taiji Suzuki, Masashi Sugiyama: Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation. J. Mach. Learn. Res. 12: 1537-1586 (2011)
- [j9] Taiji Suzuki, Ryota Tomioka: SpicyMKL: a fast algorithm for Multiple Kernel Learning with thousands of kernels. Mach. Learn. 85(1-2): 77-108 (2011)
- [j8] Taiji Suzuki, Masashi Sugiyama: Least-Squares Independent Component Analysis. Neural Comput. 23(1): 284-301 (2011)
- [j7] Masashi Sugiyama, Makoto Yamada, Paul von Bünau, Taiji Suzuki, Takafumi Kanamori, Motoaki Kawanabe: Direct density-ratio estimation with dimensionality reduction via least-squares hetero-distributional subspace search. Neural Networks 24(2): 183-198 (2011)
- [j6] Masashi Sugiyama, Taiji Suzuki, Yuta Itoh, Takafumi Kanamori, Manabu Kimura: Least-squares two-sample test. Neural Networks 24(7): 735-751 (2011)
- [c11] Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Masashi Sugiyama: Relative Density-Ratio Estimation for Robust Distribution Comparison. NIPS 2011: 594-602
- [c10] Ryota Tomioka, Taiji Suzuki, Kohei Hayashi, Hisashi Kashima: Statistical Performance of Convex Tensor Decomposition. NIPS 2011: 972-980
- [c9] Taiji Suzuki: Unifying Framework for Fast Learning Rate of Non-Sparse Multiple Kernel Learning. NIPS 2011: 1575-1583
- 2010
- [j5] Masashi Sugiyama, Ichiro Takeuchi, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Daisuke Okanohara: Least-Squares Conditional Density Estimation. IEICE Trans. Inf. Syst. 93-D(3): 583-594 (2010)
- [j4] Takafumi Kanamori, Taiji Suzuki, Masashi Sugiyama: Theoretical Analysis of Density Ratio Estimation. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 93-A(4): 787-798 (2010)
- [c8] Ryota Tomioka, Taiji Suzuki, Masashi Sugiyama, Hisashi Kashima: A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices. ICML 2010: 1087-1094
- [c7] Masashi Sugiyama, Satoshi Hara, Paul von Bünau, Taiji Suzuki, Takafumi Kanamori, Motoaki Kawanabe: Direct Density Ratio Estimation with Dimensionality Reduction. SDM 2010: 595-606
- [c6] Masashi Sugiyama, Ichiro Takeuchi, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya, Daisuke Okanohara: Conditional Density Estimation via Least-Squares Density Ratio Estimation. AISTATS 2010: 781-788
- [c5] Taiji Suzuki, Masashi Sugiyama: Sufficient Dimension Reduction via Squared-loss Mutual Information Estimation. AISTATS 2010: 804-811
- [i2] Ryota Tomioka, Taiji Suzuki: Regularization Strategies and Empirical Bayesian Learning for MKL. CoRR abs/1011.3090 (2010)
2000 – 2009
- 2009
- [j3] Taiji Suzuki, Masashi Sugiyama, Takafumi Kanamori, Jun Sese: Mutual information estimation reveals global associations between stimuli and biological processes. BMC Bioinform. 10(S-1) (2009)
- [j2] Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Shohei Hido, Jun Sese, Ichiro Takeuchi, Liwei Wang: A Density-ratio Framework for Statistical Data Processing. Inf. Media Technol. 4(4): 962-987 (2009)
- [j1] Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Shohei Hido, Jun Sese, Ichiro Takeuchi, Liwei Wang: A Density-ratio Framework for Statistical Data Processing. IPSJ Trans. Comput. Vis. Appl. 1: 183-208 (2009)
- [c4] Taiji Suzuki, Masashi Sugiyama: Estimating Squared-Loss Mutual Information for Independent Component Analysis. ICA 2009: 130-137
- [c3] Taiji Suzuki, Masashi Sugiyama, Toshiyuki Tanaka: Mutual information approximation via maximum likelihood estimation of density ratio. ISIT 2009: 463-467
- [i1] Ryota Tomioka, Taiji Suzuki, Masashi Sugiyama: Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for Sparsity Regularized Estimation. CoRR abs/0911.4046 (2009)
- 2008
- [c2] Taiji Suzuki, Masashi Sugiyama, Jun Sese, Takafumi Kanamori: Approximating Mutual Information by Maximum Likelihood Density Ratio Estimation. FSDM 2008: 5-20
- 2005
- [c1] Taiji Suzuki, Takamasa Koshizen, Kazuyuki Aihara, Hiroshi Tsujino: Learning to estimate user interest utilizing the variational Bayes estimator. ISDA 2005: 94-99