Kaleido's Personal Page
Article Digests
  • Pure Algorithm
    • Convolutional Networks with Adaptive Inference Graphs
    • Dynamic Resolution Network
    • Dynamic Neural Networks: A Survey
    • Reducing overfitting in deep networks by decorrelating representations
    • Regularizing cnns with locally constrained decorrelations
    • U-Net: Convolutional Networks for Biomedical Image Segmentation
    • Semi-Supervised Classification with Graph Convolutional Networks
  • CV
    • 3D
      • 3D Classification
        • PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
      • 3D Detection
        • PointPillars: Fast Encoders for Object Detection from Point Clouds
    • Backbone
      • An image is worth 16x16 words: Transformers for image recognition at scale
    • Image Detection
      • End-to-End Object Detection with Transformers
    • LLCV
      • Image Denoise
        • Transfer Learning from Synthetic to Real-Noise Denoising with Adaptive Instance Normalization
        • Brief review of image denoising techniques
        • Toward Convolutional Blind Denoising of Real Photographs
        • Deploying Image Deblurring across Mobile Devices: A Perspective of Quality and Latency
        • Benchmarking Denoising Algorithms with Real Photographs
        • Deep Learning for Image Denoising: A Survey
        • Deep Learning on Image Denoising: An Overview
        • Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising
        • Dynamic Residual Dense Network for Image Denoising
        • Aligned Structured Sparsity Learning for Efficient Image Super-Resolution
        • FFDNet: Toward a Fast and Flexible Solution for CNN based Image Denoising
        • Image Blind Denoising With Generative Adversarial Network Based Noise Modeling
        • HINet: Half Instance Normalization Network for Image Restoration
        • Learning Deep CNN Denoiser Prior for Image Restoration
        • Learning Raw Image Denoising with Bayer Pattern Unification and Bayer Preserving Augmentation
        • Neural Nearest Neighbors Network
        • Practical Deep Raw Image Denoising on Mobile Devices
        • Generalized Deep Image to Image Regression
        • Real Image Denoising with Feature Attention
        • Spatial-Adaptive Network for Single Image Denoising
        • A High-Quality Denoising Dataset for Smartphone Cameras
        • Robust Image Denoising with Texture-Aware Neural Network
        • Unprocessing Images for Learned Raw Denoising
        • Noise2Noise: Learning Image Restoration without Clean Data
      • Low Light Enhancement
        • Restoring Extremely Dark Images in Real Time
        • Learning to See in the Dark
      • Restoration
        • CLEARER: Multi-Scale Neural Architecture Search for Image Restoration
        • CycleISP: Real Image Restoration via Improved Data Synthesis
        • Deep Image Prior
        • Learning Enriched Features for Real Image Restoration and Enhancement
        • Multi-Stage Progressive Image Restoration
        • MemNet: A Persistent Memory Network for Image Restoration
        • Self-Guided Network for Fast Image Denoising
        • Stacking Networks Dynamically for Image Restoration Based on the Plug-and-Play Framework
        • Memory-Efficient Hierarchical Neural Architecture Search for Image Denoising
        • Attentive Fine-Grained Structured Sparsity for Image Restoration
        • Restormer: Efficient Transformer for High-Resolution Image Restoration
        • Uformer: A General U-Shaped Transformer for Image Restoration
        • Searching for Controllable Image Restoration Networks
        • Enhanced Image Restoration Via Supervised Target Feature Transfer
      • Super Resolution
        • A Layer-Wise Extreme Network Compression for Super Resolution
        • Aligned Structured Sparsity Learning for Efficient Image Super-Resolution
        • Binarized Neural Network for Single Image Super Resolution
        • Fully Quantized Image Super-Resolution Networks
        • PAMS: Quantized Super-Resolution via Parameterized Max Scale
        • Deep Learning for Image Super-resolution: A Survey
        • Training Binary Neural Network without Batch Normalization for Image Super-Resolution
        • Video super-resolution based on deep learning: a comprehensive survey
        • BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond
        • Enhanced Deep Residual Networks for Single Image Super-Resolution
        • Extremely Lightweight Quantization Robust Real-Time Single-Image Super Resolution for Mobile Devices
        • CADyQ: Content-Aware Dynamic Quantization for Image Super-Resolution
        • Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution
        • Dynamic Dual Trainable Bounds for Ultra-low Precision Super-Resolution Networks
        • Adaptive Patch Exiting for Scalable Single Image Super-Resolution
        • Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search
        • Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning
        • Fine-grained neural architecture search for image super-resolution
        • Fast and Memory-Efficient Network Towards Efficient Image Super-Resolution
        • DAQ: Channel-Wise Distribution-Aware Quantization for Deep Image Super-Resolution Networks
        • Wide Activation for Efficient and Accurate Image Super-Resolution
    • Uncategorized
      • ABPN: Adaptive Blend Pyramid Network for Real-Time Local Retouching of Ultra High-Resolution Photo
      • Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation
  • Computer Architecture
    • A Survey of Computer Architecture Simulation Techniques and Tools
    • DNNAbacus: Toward Accurate Computational Cost Prediction for Deep Neural Networks
    • MAPLE-Edge: A Runtime Latency Predictor for Edge Devices
    • STONNE: Enabling Cycle-Level Microarchitectural Simulation for DNN Inference Accelerators
    • An End-To-End Toolchain: From Automated Cost Modeling to Static WCET and WCEC Analysis
    • MLPerf Mobile Inference Benchmark
    • torch.fx: Practical program capture and transformation for deep learning in Python
    • nn-Meter: Towards Accurate Latency Prediction of Deep-Learning Model Inference on Diverse Edge Devices
    • EcoFlow: Efficient Convolutional Dataflows on Low-Power Neural Network Accelerators
  • Model Compression
    • GhostNet: More Features from Cheap Operations
    • MCUNet: Tiny Deep Learning on IoT Devices
    • MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
    • TinyTL: Reduce Activations, Not Trainable Parameters for Efficient On-Device Learning
    • A Survey of Deep Neural Network Compression and Acceleration (深度神经网络压缩与加速综述)
    • BNN related articles
      • Towards Accurate Binary Convolutional Neural Network
      • BATS: Binary ArchitecTure Search
      • Bayesian Optimized 1-Bit CNNs
      • Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm
      • DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
      • Learning Frequency Domain Approximation for Binary Neural Networks
      • High-Capacity Expert Binary Networks
      • Learning Channel-wise Interactions for Binary Convolutional Neural Networks
      • ReCU: Reviving the Dead Weights in Binary Neural Networks
      • Training Binary Neural Networks with Real-to-Binary Convolutions
      • Training Binary Neural Networks through Learning with Noisy Supervision
      • Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets
      • WRPN: Wide Reduced-Precision Networks
      • XNOR-Net
      • PokeBNN: A Binary Pursuit of Lightweight Accuracy
      • ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions
      • Bitwise Neural Networks
      • BiT: Robustly Binarized Multi-distilled Transformer
      • BinaryDuo: Reducing Gradient Mismatch in Binary Activation Network by Coupling Binary Activations
      • BoolNet: Minimizing the Energy Consumption of Binary Neural Networks
      • An Empirical study of Binary Neural Networks’ Optimisation
    • Deployment
      • Deploying Image Deblurring across Mobile Devices: A Perspective of Quality and Latency
      • Fast Camera Image Denoising on Mobile GPUs with Deep Learning
    • Knowledge Distillation
      • A Comprehensive Overhaul of Feature Distillation
      • Improve object detection with feature-based knowledge distillation: Towards accurate and efficient detectors
    • ML System
      • A systematic methodology for analysis of deep learning hardware and software platforms
      • Precious: Resource-Demand Estimation for Embedded Neural Network Accelerators
      • Learned TPU Cost Model for XLA Tensor Programs
    • NAS
      • Neural Architecture Search for Dense Prediction Tasks in Computer Vision
      • A Generic Graph-Based Neural Architecture Encoding Scheme for Predictor-Based NAS
      • Neural Predictor for Neural Architecture Search
      • NAS-BENCH-201: Extending the Scope of Reproducible Neural Architecture Search
      • A Generic Graph-based Neural Architecture Encoding Scheme with Multifaceted Information
    • Pruning
      • Architecture-Aware Network Pruning for Vision Quality Applications
      • Structured Pruning of Neural Networks with Budget-Aware Regularization
      • ECC: Platform-Independent Energy-Constrained Deep Neural Network Compression via a Bilinear Regression Model
      • Revisiting Random Channel Pruning for Neural Network Compression
      • DHP: Differentiable Meta Pruning via HyperNetworks
      • Universally Slimmable Networks and Improved Training Techniques
      • Slimmable Neural Networks
      • Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch
      • AutoSlim: Towards One-Shot Architecture Search for Channel Numbers
    • Quantization
      • A Survey of Quantization Methods for Efficient Neural Network Inference
      • Post training 4-bit quantization of convolutional networks for rapid-deployment
      • Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
      • Up or Down? Adaptive Rounding for Post-Training Quantization
      • Automated Log-Scale Quantization for Low-Cost Deep Neural Networks
      • BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction
      • Data-Free Quantization Through Weight Equalization and Bias Correction
      • Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks
      • Deep Learning with Limited Numerical Precision
      • Loss Aware Post-training Quantization
      • Learnable Companding Quantization for Accurate Low-bit Neural Networks
      • LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
      • Learned Step Size Quantization
      • MQBench: Towards Reproducible and Deployable Model Quantization Benchmark
      • Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
      • Trained quantization thresholds for accurate and efficient fixed-point inference of deep neural networks
      • ZeroQ: A Novel Zero Shot Quantization Framework
      • NoisyQuant: Noisy Bias-Enhanced Post-Training Activation Quantization for Vision Transformers
      • PACT: Parameterized Clipping Activation for Quantized Neural Networks
      • Quantization Applications
        • Post-training Quantization on Diffusion Models
        • SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
        • LLM-QAT: Data-Free Quantization Aware Training for Large Language Models
        • Quantizable Transformers Removing Outliers by Helping Attention Heads Do Nothing
        • Q-DM: An Efficient Low-bit Quantized Diffusion Model
  • Unassorted
    • AsyMo: Scalable and Efficient Deep-Learning Inference on Asymmetric Mobile CPUs
    • Elf: Accelerate High-resolution Mobile Deep Vision with Content-aware Parallel Offloading
    • Flexible High-resolution Object Detection on Edge Devices with Tunable Latency
    • CoDL: Efficient CPU-GPU Co-execution for Deep Learning Inference on Mobile Devices
    • Melon: Breaking the Memory Wall for Resource-Efficient On-Device Machine Learning
    • Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors
    • Optical Flow Estimation using a Spatial Pyramid Network
    • Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs
Codez
  • AWNAS
    • Mr. Chen's Validation Experiments
      • 2021/4/16 Results Feedback and Discussion Notes
      • 2021/4/30 Results Feedback and Discussion Notes
      • 2021/4/9 Meeting Notes
      • 2021/5/19 Results Feedback and Discussion Notes
      • First Charge
      • 2021/6/16 Results Feedback and Discussion Notes
      • 2021/6/17 Experiment Log
      • 2021/6/3 Results Feedback and Discussion Notes
      • Comparison between BMXNet & AWNAS
      • Recordings
    • aw_nas
      • btcs
        • layer2
          • bi_final_model.py
          • final_model.py
      • final
        • bnn_model.py
        • CNN_model.py
        • cnn_trainer.md
      • ops
        • bnn_ops.py
  • April Fool
    • Development Document
    • nics_fix_pytorch Reading Record
    • Requirements
  • LLM Quantization
    • LLM Quantization 101
  • Point Cloud Processing
    • OpenPCDet
    • KITTI for 3D Detection
Languages
  • Python
    • Enum Class
    • Argparse
    • defaultdict
    • logging
    • Python Module Packaging
    • tricks
    • Built-in Functions
    • Class
      • Python Miscellany: Class-Related Notes
    • Decorators
      • staticmethod
    • Packets
      • ipdb
      • MatPlotLib Plotting Collection!
      • Torch
        • ctx vs self
        • nn.Sequential vs nn.ModuleList
        • nn.avgpool2d
        • torch.cat
        • torch.ge, torch.gt, torch.le, torch.lt, torch.eq, torch.equal
        • torch.nn.functional.pad
        • The PyTorch Training Process
    • Pytorch
      • Hooks
      • nn.Embedding Usage
System Implement
  • Environment Setup
    • Ubuntu Setup Instruction
    • A Worthwhile Attempt at Building a Homepage
    • Server Environment Configuration (including awnas environment setup)
    • Linux Related
      • BMXNet-V2 building record
      • tmux
  • Markdown Related
    • Horizontal Rule Styles in Markdown
    • Languages Supported in Markdown Code Blocks
    • Creating a Markdown Table of Contents
  • Page Usage
    • Avatar Test
    • Code Blocks
    • Emoji Test
    • Fonts Test
    • Gist Test
    • Markdown Elements
    • Mathjax Test
    • Mentions Test
    • Mermaid Test
    • Toasts Card
    • Primer Utilities Test
  • Tool Box
    • Usage of Git
Backups
  • BNN Related Resources
  • Submission Notes
    • Hardware-Aware Efficient LLCV
  • Talks & Lectures
    • PPTs
    • Talks
      • 21/11/27 Computational Photography Talk by 网管
      • 21/8/23 PTQ Talk by Ruihao Gong (SenseTime)
      • 21/9/9 SenseTime QAT Talk
      • 2022_11_3
      • 2022/8/27 LLCV Talk Summary
      • ASP-DAC21 Tutorial
      • BNN in CVPRW21
      • Image Denoising - Not What You Think
    • Workshops
      • AIM2022 Notes
      • MAI2022 Notes
      • Valse 2021 Low-Level Vision and Image Processing Panel
      • Valse2021
  • ReadMes
    • Kaleido’s Personal Page
    • Kaleido’s Personal Page 2022!
    • Kaleido’s Personal Page 2023!
  • Study Record
    • 21/10/2 Learning Log
    • 21/8/26 Learning Log
    • 2022/10/17 TODOs
    • To Do List
    • Graduation Thesis Notes
    • Vegetarian Dish Collection