Mirror of https://github.com/comfyanonymous/ComfyUI.git (synced 2025-09-14 21:45:06 +00:00)
Take some code from chainner to implement ESRGAN and other upscale models.
comfy_extras/chainner_models/architecture/face/LICENSE-GFPGAN (Normal file, 351 lines)
@@ -0,0 +1,351 @@
Tencent is pleased to support the open source community by making GFPGAN available.

Copyright (C) 2021 THL A29 Limited, a Tencent company. All rights reserved.

GFPGAN is licensed under the Apache License Version 2.0 except for the third-party components listed below.

Terms of the Apache License Version 2.0:
---------------------------------------------
Apache License

Version 2.0, January 2004

http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.

“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

“Licensor” shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

“Legal Entity” shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

“You” (or “Your”) shall mean an individual or Legal Entity exercising permissions granted by this License.

“Source” form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

“Object” form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

“Work” shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

“Derivative Works” shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

“Contribution” shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, “submitted” means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as “Not a Contribution.”

“Contributor” shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

You must give any other recipients of the Work or Derivative Works a copy of this License; and

You must cause any modified files to carry prominent notices stating that You changed the files; and

You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

If the Work includes a “NOTICE” text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

Other dependencies and licenses:

Open Source Software licensed under the Apache 2.0 license and Other Licenses of the Third-Party Components therein:
---------------------------------------------
1. basicsr
Copyright 2018-2020 BasicSR Authors

This BasicSR project is released under the Apache 2.0 license.

A copy of Apache 2.0 is included in this file.

StyleGAN2
The codes are modified from the repository stylegan2-pytorch. Many thanks to the author - Kim Seonghyeon 😊 for translating from the official TensorFlow codes to PyTorch ones. Here is the license of stylegan2-pytorch.
The official repository is https://github.com/NVlabs/stylegan2, and here is the NVIDIA license.
DFDNet
The codes are largely modified from the repository DFDNet. Their license is Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Terms of the Nvidia License:
---------------------------------------------

1. Definitions

"Licensor" means any person or entity that distributes its Work.

"Software" means the original work of authorship made available under
this License.

"Work" means the Software and any additions to or derivative works of
the Software that are made available under this License.

"Nvidia Processors" means any central processing unit (CPU), graphics
processing unit (GPU), field-programmable gate array (FPGA),
application-specific integrated circuit (ASIC) or any combination
thereof designed, made, sold, or provided by Nvidia or its affiliates.

The terms "reproduce," "reproduction," "derivative works," and
"distribution" have the meaning as provided under U.S. copyright law;
provided, however, that for the purposes of this License, derivative
works shall not include works that remain separable from, or merely
link (or bind by name) to the interfaces of, the Work.

Works, including the Software, are "made available" under this License
by including in or with the Work either (a) a copyright notice
referencing the applicability of this License to the Work, or (b) a
copy of this License.

2. License Grants

2.1 Copyright Grant. Subject to the terms and conditions of this
License, each Licensor grants to you a perpetual, worldwide,
non-exclusive, royalty-free, copyright license to reproduce,
prepare derivative works of, publicly display, publicly perform,
sublicense and distribute its Work and any resulting derivative
works in any form.

3. Limitations

3.1 Redistribution. You may reproduce or distribute the Work only
if (a) you do so under this License, (b) you include a complete
copy of this License with your distribution, and (c) you retain
without modification any copyright, patent, trademark, or
attribution notices that are present in the Work.

3.2 Derivative Works. You may specify that additional or different
terms apply to the use, reproduction, and distribution of your
derivative works of the Work ("Your Terms") only if (a) Your Terms
provide that the use limitation in Section 3.3 applies to your
derivative works, and (b) you identify the specific derivative
works that are subject to Your Terms. Notwithstanding Your Terms,
this License (including the redistribution requirements in Section
3.1) will continue to apply to the Work itself.

3.3 Use Limitation. The Work and any derivative works thereof only
may be used or intended for use non-commercially. The Work or
derivative works thereof may be used or intended for use by Nvidia
or its affiliates commercially or non-commercially. As used herein,
"non-commercially" means for research or evaluation purposes only.

3.4 Patent Claims. If you bring or threaten to bring a patent claim
against any Licensor (including any claim, cross-claim or
counterclaim in a lawsuit) to enforce any patents that you allege
are infringed by any Work, then your rights under this License from
such Licensor (including the grants in Sections 2.1 and 2.2) will
terminate immediately.

3.5 Trademarks. This License does not grant any rights to use any
Licensor's or its affiliates' names, logos, or trademarks, except
as necessary to reproduce the notices described in this License.

3.6 Termination. If you violate any term of this License, then your
rights under this License (including the grants in Sections 2.1 and
2.2) will terminate immediately.

4. Disclaimer of Warranty.

THE WORK IS PROVIDED "AS IS" WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OR CONDITIONS OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE OR
NON-INFRINGEMENT. YOU BEAR THE RISK OF UNDERTAKING ANY ACTIVITIES UNDER
THIS LICENSE.

5. Limitation of Liability.

EXCEPT AS PROHIBITED BY APPLICABLE LAW, IN NO EVENT AND UNDER NO LEGAL
THEORY, WHETHER IN TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE
SHALL ANY LICENSOR BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY DIRECT,
INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF
OR RELATED TO THIS LICENSE, THE USE OR INABILITY TO USE THE WORK
(INCLUDING BUT NOT LIMITED TO LOSS OF GOODWILL, BUSINESS INTERRUPTION,
LOST PROFITS OR DATA, COMPUTER FAILURE OR MALFUNCTION, OR ANY OTHER
COMMERCIAL DAMAGES OR LOSSES), EVEN IF THE LICENSOR HAS BEEN ADVISED OF
THE POSSIBILITY OF SUCH DAMAGES.

MIT License

Copyright (c) 2019 Kim Seonghyeon

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Open Source Software licensed under the BSD 3-Clause license:
---------------------------------------------
1. torchvision
Copyright (c) Soumith Chintala 2016,
All rights reserved.

2. torch
Copyright (c) 2016- Facebook, Inc (Adam Paszke)
Copyright (c) 2014- Facebook, Inc (Soumith Chintala)
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)

Terms of the BSD 3-Clause License:
---------------------------------------------
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Open Source Software licensed under the BSD 3-Clause License and Other Licenses of the Third-Party Components therein:
---------------------------------------------
1. numpy
Copyright (c) 2005-2020, NumPy Developers.
All rights reserved.

A copy of BSD 3-Clause License is included in this file.

The NumPy repository and source distributions bundle several libraries that are
compatibly licensed. We list these here.

Name: Numpydoc
Files: doc/sphinxext/numpydoc/*
License: BSD-2-Clause
For details, see doc/sphinxext/LICENSE.txt

Name: scipy-sphinx-theme
Files: doc/scipy-sphinx-theme/*
License: BSD-3-Clause AND PSF-2.0 AND Apache-2.0
For details, see doc/scipy-sphinx-theme/LICENSE.txt

Name: lapack-lite
Files: numpy/linalg/lapack_lite/*
License: BSD-3-Clause
For details, see numpy/linalg/lapack_lite/LICENSE.txt

Name: tempita
Files: tools/npy_tempita/*
License: MIT
For details, see tools/npy_tempita/license.txt

Name: dragon4
Files: numpy/core/src/multiarray/dragon4.c
License: MIT
For license text, see numpy/core/src/multiarray/dragon4.c

Open Source Software licensed under the MIT license:
---------------------------------------------
1. facexlib
Copyright (c) 2020 Xintao Wang

2. opencv-python
Copyright (c) Olli-Pekka Heinisuo
Please note that only files in cv2 package are used.

Terms of the MIT License:
---------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Open Source Software licensed under the MIT license and Other Licenses of the Third-Party Components therein:
---------------------------------------------
1. tqdm
Copyright (c) 2013 noamraph

`tqdm` is a product of collaborative work.
Unless otherwise stated, all authors (see commit logs) retain copyright
for their respective work, and release the work under the MIT licence
(text below).

Exceptions or notable authors are listed below
in reverse chronological order:

* files: *
MPLv2.0 2015-2020 (c) Casper da Costa-Luis
[casperdcl](https://github.com/casperdcl).
* files: tqdm/_tqdm.py
MIT 2016 (c) [PR #96] on behalf of Google Inc.
* files: tqdm/_tqdm.py setup.py README.rst MANIFEST.in .gitignore
MIT 2013 (c) Noam Yorav-Raphael, original author.

[PR #96]: https://github.com/tqdm/tqdm/pull/96

Mozilla Public Licence (MPL) v. 2.0 - Exhibit A
-----------------------------------------------

This Source Code Form is subject to the terms of the
Mozilla Public License, v. 2.0.
If a copy of the MPL was not distributed with this file,
You can obtain one at https://mozilla.org/MPL/2.0/.

MIT License (MIT)
-----------------

Copyright (c) 2013 noamraph

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
@@ -0,0 +1,35 @@
S-Lab License 1.0

Copyright 2022 S-Lab

Redistribution and use for non-commercial purpose in source and
binary forms, with or without modification, are permitted provided
that the following conditions are met:

1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in
   the documentation and/or other materials provided with the
   distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived
   from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

In the event that redistribution and/or use for commercial purpose in
source or binary forms, with or without modification is required,
please contact the contributor(s) of the work.
comfy_extras/chainner_models/architecture/face/arcface_arch.py (Normal file, 265 lines)
@@ -0,0 +1,265 @@
|
||||
import torch.nn as nn
|
||||
|
||||
|
||||
def conv3x3(inplanes, outplanes, stride=1):
|
||||
"""A simple wrapper for 3x3 convolution with padding.
|
||||
|
||||
Args:
|
||||
inplanes (int): Channel number of inputs.
|
||||
outplanes (int): Channel number of outputs.
|
||||
stride (int): Stride in convolution. Default: 1.
|
||||
"""
|
||||
return nn.Conv2d(
|
||||
inplanes, outplanes, kernel_size=3, stride=stride, padding=1, bias=False
|
||||
)
|
||||
|
||||
|
||||
class BasicBlock(nn.Module):
|
||||
"""Basic residual block used in the ResNetArcFace architecture.
|
||||
|
||||
Args:
|
||||
inplanes (int): Channel number of inputs.
|
||||
planes (int): Channel number of outputs.
|
||||
stride (int): Stride in convolution. Default: 1.
|
||||
downsample (nn.Module): The downsample module. Default: None.
|
||||
"""
|
||||
|
||||
expansion = 1 # output channel expansion ratio
|
||||
|
||||
def __init__(self, inplanes, planes, stride=1, downsample=None):
|
||||
super(BasicBlock, self).__init__()
|
||||
self.conv1 = conv3x3(inplanes, planes, stride)
|
||||
self.bn1 = nn.BatchNorm2d(planes)
|
||||
self.relu = nn.ReLU(inplace=True)
|
||||
self.conv2 = conv3x3(planes, planes)
|
||||
self.bn2 = nn.BatchNorm2d(planes)
|
||||
self.downsample = downsample
|
||||
self.stride = stride
|
||||
|
||||
def forward(self, x):
|
||||
residual = x
|
||||
|
||||
out = self.conv1(x)
|
||||
out = self.bn1(out)
|
||||
out = self.relu(out)
|
||||
|
||||
out = self.conv2(out)
|
||||
out = self.bn2(out)
|
||||
|
||||
if self.downsample is not None:
|
||||
residual = self.downsample(x)
|
||||
|
||||
out += residual
|
||||
out = self.relu(out)
|
||||
|
||||
return out
|
||||
|
||||
|
||||
class IRBlock(nn.Module):
|
||||
"""Improved residual block (IR Block) used in the ResNetArcFace architecture.
|
||||
|
||||
Args:
|
||||
inplanes (int): Channel number of inputs.
|
||||
planes (int): Channel number of outputs.
|
||||
stride (int): Stride in convolution. Default: 1.
|
||||
downsample (nn.Module): The downsample module. Default: None.
|
||||
use_se (bool): Whether use the SEBlock (squeeze and excitation block). Default: True.
|
||||
"""
|
||||
|
||||
expansion = 1 # output channel expansion ratio
|
||||
|
||||
def __init__(self, inplanes, planes, stride=1, downsample=None, use_se=True):
|
||||
super(IRBlock, self).__init__()
|
||||
self.bn0 = nn.BatchNorm2d(inplanes)
|
||||
self.conv1 = conv3x3(inplanes, inplanes)
|
||||
self.bn1 = nn.BatchNorm2d(inplanes)
|
||||
self.prelu = nn.PReLU()
|
||||
self.conv2 = conv3x3(inplanes, planes, stride)
|
||||
self.bn2 = nn.BatchNorm2d(planes)
|
||||
self.downsample = downsample
|
||||
self.stride = stride
|
||||
self.use_se = use_se
|
||||
if self.use_se:
|
||||
self.se = SEBlock(planes)
|
||||
|
||||
def forward(self, x):
|
||||
residual = x
|
||||
out = self.bn0(x)
|
||||
out = self.conv1(out)
|
||||
out = self.bn1(out)
|
||||
out = self.prelu(out)
|
||||
|
||||
out = self.conv2(out)
|
||||
out = self.bn2(out)
|
||||
if self.use_se:
|
||||
out = self.se(out)
|
||||
|
||||
if self.downsample is not None:
|
||||
residual = self.downsample(x)
|
||||
|
||||
out += residual
|
||||
out = self.prelu(out)
|
||||
|
||||
return out
|
||||
|
||||
|
||||
class Bottleneck(nn.Module):
|
||||
"""Bottleneck block used in the ResNetArcFace architecture.
|
||||
|
||||
Args:
|
||||
inplanes (int): Channel number of inputs.
|
||||
planes (int): Channel number of outputs.
|
||||
stride (int): Stride in convolution. Default: 1.
|
||||
downsample (nn.Module): The downsample module. Default: None.
|
||||
"""
|
||||
|
||||
expansion = 4 # output channel expansion ratio
|
||||
|
||||
def __init__(self, inplanes, planes, stride=1, downsample=None):
|
||||
super(Bottleneck, self).__init__()
|
||||
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
|
||||
self.bn1 = nn.BatchNorm2d(planes)
|
||||
self.conv2 = nn.Conv2d(
|
||||
planes, planes, kernel_size=3, stride=stride, padding=1, bias=False
|
||||
)
|
||||
self.bn2 = nn.BatchNorm2d(planes)
|
||||
self.conv3 = nn.Conv2d(
|
||||
planes, planes * self.expansion, kernel_size=1, bias=False
|
||||
)
|
||||
self.bn3 = nn.BatchNorm2d(planes * self.expansion)
|
||||
self.relu = nn.ReLU(inplace=True)
|
||||
self.downsample = downsample
|
||||
self.stride = stride
|
||||
|
||||
def forward(self, x):
|
||||
residual = x
|
||||
|
||||
out = self.conv1(x)
|
||||
out = self.bn1(out)
|
||||
out = self.relu(out)
|
||||
|
||||
out = self.conv2(out)
|
||||
out = self.bn2(out)
|
||||
out = self.relu(out)
|
||||
|
||||
out = self.conv3(out)
|
||||
out = self.bn3(out)
|
||||
|
||||
if self.downsample is not None:
|
||||
residual = self.downsample(x)
|
||||
|
||||
out += residual
|
||||
out = self.relu(out)
|
||||
|
||||
return out
|
||||
|
||||
|
||||
class SEBlock(nn.Module):
|
||||
"""The squeeze-and-excitation block (SEBlock) used in the IRBlock.
|
||||
|
||||
Args:
|
||||
channel (int): Channel number of inputs.
|
||||
reduction (int): Channel reduction ratio. Default: 16.
|
||||
"""
|
||||
|
||||
def __init__(self, channel, reduction=16):
|
||||
super(SEBlock, self).__init__()
|
||||
self.avg_pool = nn.AdaptiveAvgPool2d(
|
||||
1
|
||||
) # pool to 1x1 without spatial information
|
||||
self.fc = nn.Sequential(
|
||||
nn.Linear(channel, channel // reduction),
|
||||
nn.PReLU(),
|
||||
nn.Linear(channel // reduction, channel),
|
||||
nn.Sigmoid(),
|
||||
)
|
||||
|
||||
def forward(self, x):
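# squeeze: global-average-pool each channel down to a single value;
# excite: a bottleneck MLP with a sigmoid produces per-channel weights
# in [0, 1] that rescale the input feature map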
|
||||
b, c, _, _ = x.size()
|
||||
y = self.avg_pool(x).view(b, c)
|
||||
y = self.fc(y).view(b, c, 1, 1)
|
||||
return x * y
|
||||
|
||||
|
||||
class ResNetArcFace(nn.Module):
|
||||
"""ArcFace with ResNet architectures.
|
||||
|
||||
Ref: ArcFace: Additive Angular Margin Loss for Deep Face Recognition.
|
||||
|
||||
Args:
|
||||
block (str): Block used in the ArcFace architecture.
|
||||
layers (tuple(int)): Block numbers in each layer.
|
||||
use_se (bool): Whether to use the SEBlock (squeeze-and-excitation block). Default: True.
|
||||
"""
|
||||
|
||||
def __init__(self, block, layers, use_se=True):
|
||||
if block == "IRBlock":
|
||||
block = IRBlock
|
||||
self.inplanes = 64
|
||||
self.use_se = use_se
|
||||
super(ResNetArcFace, self).__init__()
|
||||
|
||||
self.conv1 = nn.Conv2d(1, 64, kernel_size=3, padding=1, bias=False)
|
||||
self.bn1 = nn.BatchNorm2d(64)
|
||||
self.prelu = nn.PReLU()
|
||||
self.maxpool = nn.MaxPool2d(kernel_size=2, stride=2)
|
||||
self.layer1 = self._make_layer(block, 64, layers[0])
|
||||
self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
|
||||
self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
|
||||
self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
|
||||
self.bn4 = nn.BatchNorm2d(512)
|
||||
self.dropout = nn.Dropout()
|
||||
self.fc5 = nn.Linear(512 * 8 * 8, 512)
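# the 512 * 8 * 8 flatten size implies a 128x128 single-channel input:
# the max-pool and the three strided layers each halve the spatial size
# (128 / 2^4 = 8)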
|
||||
self.bn5 = nn.BatchNorm1d(512)
|
||||
|
||||
# initialization
|
||||
for m in self.modules():
|
||||
if isinstance(m, nn.Conv2d):
|
||||
nn.init.xavier_normal_(m.weight)
|
||||
elif isinstance(m, nn.BatchNorm2d) or isinstance(m, nn.BatchNorm1d):
|
||||
nn.init.constant_(m.weight, 1)
|
||||
nn.init.constant_(m.bias, 0)
|
||||
elif isinstance(m, nn.Linear):
|
||||
nn.init.xavier_normal_(m.weight)
|
||||
nn.init.constant_(m.bias, 0)
|
||||
|
||||
def _make_layer(self, block, planes, num_blocks, stride=1):
|
||||
downsample = None
|
||||
if stride != 1 or self.inplanes != planes * block.expansion:
|
||||
downsample = nn.Sequential(
|
||||
nn.Conv2d(
|
||||
self.inplanes,
|
||||
planes * block.expansion,
|
||||
kernel_size=1,
|
||||
stride=stride,
|
||||
bias=False,
|
||||
),
|
||||
nn.BatchNorm2d(planes * block.expansion),
|
||||
)
|
||||
layers = []
|
||||
layers.append(
|
||||
block(self.inplanes, planes, stride, downsample, use_se=self.use_se)
|
||||
)
|
||||
self.inplanes = planes
|
||||
for _ in range(1, num_blocks):
|
||||
layers.append(block(self.inplanes, planes, use_se=self.use_se))
|
||||
|
||||
return nn.Sequential(*layers)
|
||||
|
||||
def forward(self, x):
|
||||
x = self.conv1(x)
|
||||
x = self.bn1(x)
|
||||
x = self.prelu(x)
|
||||
x = self.maxpool(x)
|
||||
|
||||
x = self.layer1(x)
|
||||
x = self.layer2(x)
|
||||
x = self.layer3(x)
|
||||
x = self.layer4(x)
|
||||
x = self.bn4(x)
|
||||
x = self.dropout(x)
|
||||
x = x.view(x.size(0), -1)
|
||||
x = self.fc5(x)
|
||||
x = self.bn5(x)
|
||||
|
||||
return x
|
790
comfy_extras/chainner_models/architecture/face/codeformer.py
Normal file
@@ -0,0 +1,790 @@
|
||||
"""
|
||||
Modified from https://github.com/sczhou/CodeFormer
|
||||
VQGAN code, adapted from the original created by the Unleashing Transformers authors:
|
||||
https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py
|
||||
This version of the arch specifically was gathered from an old version of GFPGAN. If this is a problem, please contact me.
|
||||
"""
|
||||
import math
|
||||
from typing import Optional
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
import logging as logger
|
||||
from torch import Tensor
|
||||
|
||||
|
||||
class VectorQuantizer(nn.Module):
|
||||
def __init__(self, codebook_size, emb_dim, beta):
|
||||
super(VectorQuantizer, self).__init__()
|
||||
self.codebook_size = codebook_size # number of embeddings
|
||||
self.emb_dim = emb_dim # dimension of embedding
|
||||
self.beta = beta # commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2
|
||||
self.embedding = nn.Embedding(self.codebook_size, self.emb_dim)
|
||||
self.embedding.weight.data.uniform_(
|
||||
-1.0 / self.codebook_size, 1.0 / self.codebook_size
|
||||
)
|
||||
|
||||
def forward(self, z):
|
||||
# reshape z -> (batch, height, width, channel) and flatten
|
||||
z = z.permute(0, 2, 3, 1).contiguous()
|
||||
z_flattened = z.view(-1, self.emb_dim)
|
||||
|
||||
# distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z
|
||||
d = (
|
||||
(z_flattened**2).sum(dim=1, keepdim=True)
|
||||
+ (self.embedding.weight**2).sum(1)
|
||||
- 2 * torch.matmul(z_flattened, self.embedding.weight.t())
|
||||
)
|
||||
|
||||
mean_distance = torch.mean(d)
|
||||
# find closest encodings
|
||||
# min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1)
|
||||
min_encoding_scores, min_encoding_indices = torch.topk(
|
||||
d, 1, dim=1, largest=False
|
||||
)
|
||||
# [0-1], higher score, higher confidence
|
||||
min_encoding_scores = torch.exp(-min_encoding_scores / 10)
|
||||
|
||||
min_encodings = torch.zeros(
|
||||
min_encoding_indices.shape[0], self.codebook_size
|
||||
).to(z)
|
||||
min_encodings.scatter_(1, min_encoding_indices, 1)
|
||||
|
||||
# get quantized latent vectors
|
||||
z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape)
|
||||
# compute loss for embedding
|
||||
loss = torch.mean((z_q.detach() - z) ** 2) + self.beta * torch.mean(
|
||||
(z_q - z.detach()) ** 2
|
||||
)
|
||||
# preserve gradients
|
||||
z_q = z + (z_q - z).detach()
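# the line above is the straight-through estimator: the forward pass uses the
# quantized values, while gradients flow back to z as if quantization were the identity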
|
||||
|
||||
# perplexity
|
||||
e_mean = torch.mean(min_encodings, dim=0)
|
||||
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
|
||||
# reshape back to match original input shape
|
||||
z_q = z_q.permute(0, 3, 1, 2).contiguous()
|
||||
|
||||
return (
|
||||
z_q,
|
||||
loss,
|
||||
{
|
||||
"perplexity": perplexity,
|
||||
"min_encodings": min_encodings,
|
||||
"min_encoding_indices": min_encoding_indices,
|
||||
"min_encoding_scores": min_encoding_scores,
|
||||
"mean_distance": mean_distance,
|
||||
},
|
||||
)
|
||||
|
||||
def get_codebook_feat(self, indices, shape):
|
||||
# input indices: batch*token_num -> (batch*token_num)*1
|
||||
# shape: batch, height, width, channel
|
||||
indices = indices.view(-1, 1)
|
||||
min_encodings = torch.zeros(indices.shape[0], self.codebook_size).to(indices)
|
||||
min_encodings.scatter_(1, indices, 1)
|
||||
# get quantized latent vectors
|
||||
z_q = torch.matmul(min_encodings.float(), self.embedding.weight)
|
||||
|
||||
if shape is not None: # reshape back to match original input shape
|
||||
z_q = z_q.view(shape).permute(0, 3, 1, 2).contiguous()
|
||||
|
||||
return z_q
|
||||
|
||||
|
||||
class GumbelQuantizer(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
codebook_size,
|
||||
emb_dim,
|
||||
num_hiddens,
|
||||
straight_through=False,
|
||||
kl_weight=5e-4,
|
||||
temp_init=1.0,
|
||||
):
|
||||
super().__init__()
|
||||
self.codebook_size = codebook_size # number of embeddings
|
||||
self.emb_dim = emb_dim # dimension of embedding
|
||||
self.straight_through = straight_through
|
||||
self.temperature = temp_init
|
||||
self.kl_weight = kl_weight
|
||||
self.proj = nn.Conv2d(
|
||||
num_hiddens, codebook_size, 1
|
||||
) # projects last encoder layer to quantized logits
|
||||
self.embed = nn.Embedding(codebook_size, emb_dim)
|
||||
|
||||
def forward(self, z):
|
||||
hard = self.straight_through if self.training else True
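# sample hard (one-hot) codes at inference time; during training keep the
# soft relaxation unless straight-through sampling was requested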
|
||||
|
||||
logits = self.proj(z)
|
||||
|
||||
soft_one_hot = F.gumbel_softmax(logits, tau=self.temperature, dim=1, hard=hard)
|
||||
|
||||
z_q = torch.einsum("b n h w, n d -> b d h w", soft_one_hot, self.embed.weight)
|
||||
|
||||
# + kl divergence to the prior loss
|
||||
qy = F.softmax(logits, dim=1)
|
||||
diff = (
|
||||
self.kl_weight
|
||||
* torch.sum(qy * torch.log(qy * self.codebook_size + 1e-10), dim=1).mean()
|
||||
)
|
||||
min_encoding_indices = soft_one_hot.argmax(dim=1)
|
||||
|
||||
return z_q, diff, {"min_encoding_indices": min_encoding_indices}
|
||||
|
||||
|
||||
class Downsample(nn.Module):
|
||||
def __init__(self, in_channels):
|
||||
super().__init__()
|
||||
self.conv = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=3, stride=2, padding=0
|
||||
)
|
||||
|
||||
def forward(self, x):
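# pad only on the right and bottom so the stride-2, kernel-3 conv with
# padding=0 halves the spatial size exactly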
|
||||
pad = (0, 1, 0, 1)
|
||||
x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
|
||||
x = self.conv(x)
|
||||
return x
|
||||
|
||||
|
||||
class Upsample(nn.Module):
|
||||
def __init__(self, in_channels):
|
||||
super().__init__()
|
||||
self.conv = nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
x = F.interpolate(x, scale_factor=2.0, mode="nearest")
|
||||
x = self.conv(x)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
class AttnBlock(nn.Module):
|
||||
def __init__(self, in_channels):
|
||||
super().__init__()
|
||||
self.in_channels = in_channels
|
||||
|
||||
self.norm = normalize(in_channels)
|
||||
self.q = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
self.k = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
self.v = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
self.proj_out = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
h_ = x
|
||||
h_ = self.norm(h_)
|
||||
q = self.q(h_)
|
||||
k = self.k(h_)
|
||||
v = self.v(h_)
|
||||
|
||||
# compute attention
|
||||
b, c, h, w = q.shape
|
||||
q = q.reshape(b, c, h * w)
|
||||
q = q.permute(0, 2, 1)
|
||||
k = k.reshape(b, c, h * w)
|
||||
w_ = torch.bmm(q, k)
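# w_[b, i, j] is the unnormalized attention weight from query position i to
# key position j; it is scaled by 1/sqrt(c) before the softmax below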
|
||||
w_ = w_ * (int(c) ** (-0.5))
|
||||
w_ = F.softmax(w_, dim=2)
|
||||
|
||||
# attend to values
|
||||
v = v.reshape(b, c, h * w)
|
||||
w_ = w_.permute(0, 2, 1)
|
||||
h_ = torch.bmm(v, w_)
|
||||
h_ = h_.reshape(b, c, h, w)
|
||||
|
||||
h_ = self.proj_out(h_)
|
||||
|
||||
return x + h_
|
||||
|
||||
|
||||
class Encoder(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
nf,
|
||||
out_channels,
|
||||
ch_mult,
|
||||
num_res_blocks,
|
||||
resolution,
|
||||
attn_resolutions,
|
||||
):
|
||||
super().__init__()
|
||||
self.nf = nf
|
||||
self.num_resolutions = len(ch_mult)
|
||||
self.num_res_blocks = num_res_blocks
|
||||
self.resolution = resolution
|
||||
self.attn_resolutions = attn_resolutions
|
||||
|
||||
curr_res = self.resolution
|
||||
in_ch_mult = (1,) + tuple(ch_mult)
|
||||
|
||||
blocks = []
|
||||
# initial convolution
|
||||
blocks.append(nn.Conv2d(in_channels, nf, kernel_size=3, stride=1, padding=1))
|
||||
|
||||
# residual and downsampling blocks, with attention on smaller res (16x16)
|
||||
for i in range(self.num_resolutions):
|
||||
block_in_ch = nf * in_ch_mult[i]
|
||||
block_out_ch = nf * ch_mult[i]
|
||||
for _ in range(self.num_res_blocks):
|
||||
blocks.append(ResBlock(block_in_ch, block_out_ch))
|
||||
block_in_ch = block_out_ch
|
||||
if curr_res in attn_resolutions:
|
||||
blocks.append(AttnBlock(block_in_ch))
|
||||
|
||||
if i != self.num_resolutions - 1:
|
||||
blocks.append(Downsample(block_in_ch))
|
||||
curr_res = curr_res // 2
|
||||
|
||||
# non-local attention block
|
||||
blocks.append(ResBlock(block_in_ch, block_in_ch)) # type: ignore
|
||||
blocks.append(AttnBlock(block_in_ch)) # type: ignore
|
||||
blocks.append(ResBlock(block_in_ch, block_in_ch)) # type: ignore
|
||||
|
||||
# normalise and convert to latent size
|
||||
blocks.append(normalize(block_in_ch)) # type: ignore
|
||||
blocks.append(
|
||||
nn.Conv2d(block_in_ch, out_channels, kernel_size=3, stride=1, padding=1) # type: ignore
|
||||
)
|
||||
self.blocks = nn.ModuleList(blocks)
|
||||
|
||||
def forward(self, x):
|
||||
for block in self.blocks:
|
||||
x = block(x)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
class Generator(nn.Module):
|
||||
def __init__(self, nf, ch_mult, res_blocks, img_size, attn_resolutions, emb_dim):
|
||||
super().__init__()
|
||||
self.nf = nf
|
||||
self.ch_mult = ch_mult
|
||||
self.num_resolutions = len(self.ch_mult)
|
||||
self.num_res_blocks = res_blocks
|
||||
self.resolution = img_size
|
||||
self.attn_resolutions = attn_resolutions
|
||||
self.in_channels = emb_dim
|
||||
self.out_channels = 3
|
||||
block_in_ch = self.nf * self.ch_mult[-1]
|
||||
curr_res = self.resolution // 2 ** (self.num_resolutions - 1)
|
||||
|
||||
blocks = []
|
||||
# initial conv
|
||||
blocks.append(
|
||||
nn.Conv2d(self.in_channels, block_in_ch, kernel_size=3, stride=1, padding=1)
|
||||
)
|
||||
|
||||
# non-local attention block
|
||||
blocks.append(ResBlock(block_in_ch, block_in_ch))
|
||||
blocks.append(AttnBlock(block_in_ch))
|
||||
blocks.append(ResBlock(block_in_ch, block_in_ch))
|
||||
|
||||
for i in reversed(range(self.num_resolutions)):
|
||||
block_out_ch = self.nf * self.ch_mult[i]
|
||||
|
||||
for _ in range(self.num_res_blocks):
|
||||
blocks.append(ResBlock(block_in_ch, block_out_ch))
|
||||
block_in_ch = block_out_ch
|
||||
|
||||
if curr_res in self.attn_resolutions:
|
||||
blocks.append(AttnBlock(block_in_ch))
|
||||
|
||||
if i != 0:
|
||||
blocks.append(Upsample(block_in_ch))
|
||||
curr_res = curr_res * 2
|
||||
|
||||
blocks.append(normalize(block_in_ch))
|
||||
blocks.append(
|
||||
nn.Conv2d(
|
||||
block_in_ch, self.out_channels, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
)
|
||||
|
||||
self.blocks = nn.ModuleList(blocks)
|
||||
|
||||
def forward(self, x):
|
||||
for block in self.blocks:
|
||||
x = block(x)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
class VQAutoEncoder(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
img_size,
|
||||
nf,
|
||||
ch_mult,
|
||||
quantizer="nearest",
|
||||
res_blocks=2,
|
||||
attn_resolutions=[16],
|
||||
codebook_size=1024,
|
||||
emb_dim=256,
|
||||
beta=0.25,
|
||||
gumbel_straight_through=False,
|
||||
gumbel_kl_weight=1e-8,
|
||||
model_path=None,
|
||||
):
|
||||
super().__init__()
|
||||
self.in_channels = 3
|
||||
self.nf = nf
|
||||
self.n_blocks = res_blocks
|
||||
self.codebook_size = codebook_size
|
||||
self.embed_dim = emb_dim
|
||||
self.ch_mult = ch_mult
|
||||
self.resolution = img_size
|
||||
self.attn_resolutions = attn_resolutions
|
||||
self.quantizer_type = quantizer
|
||||
self.encoder = Encoder(
|
||||
self.in_channels,
|
||||
self.nf,
|
||||
self.embed_dim,
|
||||
self.ch_mult,
|
||||
self.n_blocks,
|
||||
self.resolution,
|
||||
self.attn_resolutions,
|
||||
)
|
||||
if self.quantizer_type == "nearest":
|
||||
self.beta = beta # 0.25
|
||||
self.quantize = VectorQuantizer(
|
||||
self.codebook_size, self.embed_dim, self.beta
|
||||
)
|
||||
elif self.quantizer_type == "gumbel":
|
||||
self.gumbel_num_hiddens = emb_dim
|
||||
self.straight_through = gumbel_straight_through
|
||||
self.kl_weight = gumbel_kl_weight
|
||||
self.quantize = GumbelQuantizer(
|
||||
self.codebook_size,
|
||||
self.embed_dim,
|
||||
self.gumbel_num_hiddens,
|
||||
self.straight_through,
|
||||
self.kl_weight,
|
||||
)
|
||||
self.generator = Generator(
|
||||
nf, ch_mult, res_blocks, img_size, attn_resolutions, emb_dim
|
||||
)
|
||||
|
||||
if model_path is not None:
|
||||
chkpt = torch.load(model_path, map_location="cpu")
|
||||
if "params_ema" in chkpt:
|
||||
self.load_state_dict(
|
||||
torch.load(model_path, map_location="cpu")["params_ema"]
|
||||
)
|
||||
logger.info(f"vqgan is loaded from: {model_path} [params_ema]")
|
||||
elif "params" in chkpt:
|
||||
self.load_state_dict(
|
||||
torch.load(model_path, map_location="cpu")["params"]
|
||||
)
|
||||
logger.info(f"vqgan is loaded from: {model_path} [params]")
|
||||
else:
|
||||
raise ValueError("Wrong params!")
|
||||
|
||||
def forward(self, x):
|
||||
x = self.encoder(x)
|
||||
quant, codebook_loss, quant_stats = self.quantize(x)
|
||||
x = self.generator(quant)
|
||||
return x, codebook_loss, quant_stats
|
||||
|
||||
|
||||
def calc_mean_std(feat, eps=1e-5):
|
||||
"""Calculate mean and std for adaptive_instance_normalization.
|
||||
Args:
|
||||
feat (Tensor): 4D tensor.
|
||||
eps (float): A small value added to the variance to avoid
|
||||
divide-by-zero. Default: 1e-5.
|
||||
"""
|
||||
size = feat.size()
|
||||
assert len(size) == 4, "The input feature should be 4D tensor."
|
||||
b, c = size[:2]
|
||||
feat_var = feat.view(b, c, -1).var(dim=2) + eps
|
||||
feat_std = feat_var.sqrt().view(b, c, 1, 1)
|
||||
feat_mean = feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1)
|
||||
return feat_mean, feat_std
|
||||
|
||||
|
||||
def adaptive_instance_normalization(content_feat, style_feat):
|
||||
"""Adaptive instance normalization.
|
||||
Adjust the reference features to have a similar color and illumination
|
||||
as those in the degraded features.
|
||||
Args:
|
||||
content_feat (Tensor): The reference feature.
|
||||
style_feat (Tensor): The degraded features.
|
||||
"""
|
||||
size = content_feat.size()
|
||||
style_mean, style_std = calc_mean_std(style_feat)
|
||||
content_mean, content_std = calc_mean_std(content_feat)
|
||||
normalized_feat = (content_feat - content_mean.expand(size)) / content_std.expand(
|
||||
size
|
||||
)
|
||||
return normalized_feat * style_std.expand(size) + style_mean.expand(size)
|
||||
|
||||
|
||||
class PositionEmbeddingSine(nn.Module):
|
||||
"""
|
||||
This is a more standard version of the position embedding, very similar to the one
|
||||
used by the Attention is all you need paper, generalized to work on images.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, num_pos_feats=64, temperature=10000, normalize=False, scale=None
|
||||
):
|
||||
super().__init__()
|
||||
self.num_pos_feats = num_pos_feats
|
||||
self.temperature = temperature
|
||||
self.normalize = normalize
|
||||
if scale is not None and normalize is False:
|
||||
raise ValueError("normalize should be True if scale is passed")
|
||||
if scale is None:
|
||||
scale = 2 * math.pi
|
||||
self.scale = scale
|
||||
|
||||
def forward(self, x, mask=None):
|
||||
if mask is None:
|
||||
mask = torch.zeros(
|
||||
(x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool
|
||||
)
|
||||
not_mask = ~mask # pylint: disable=invalid-unary-operand-type
|
||||
y_embed = not_mask.cumsum(1, dtype=torch.float32)
|
||||
x_embed = not_mask.cumsum(2, dtype=torch.float32)
|
||||
if self.normalize:
|
||||
eps = 1e-6
|
||||
y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
|
||||
x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
|
||||
|
||||
dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
|
||||
dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
|
||||
|
||||
pos_x = x_embed[:, :, :, None] / dim_t
|
||||
pos_y = y_embed[:, :, :, None] / dim_t
|
||||
pos_x = torch.stack(
|
||||
(pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
|
||||
).flatten(3)
|
||||
pos_y = torch.stack(
|
||||
(pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
|
||||
).flatten(3)
|
||||
pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
|
||||
return pos
|
||||
|
||||
|
||||
def _get_activation_fn(activation):
|
||||
"""Return an activation function given a string"""
|
||||
if activation == "relu":
|
||||
return F.relu
|
||||
if activation == "gelu":
|
||||
return F.gelu
|
||||
if activation == "glu":
|
||||
return F.glu
|
||||
raise RuntimeError(f"activation should be relu/gelu, not {activation}.")
|
||||
|
||||
|
||||
class TransformerSALayer(nn.Module):
|
||||
def __init__(
|
||||
self, embed_dim, nhead=8, dim_mlp=2048, dropout=0.0, activation="gelu"
|
||||
):
|
||||
super().__init__()
|
||||
self.self_attn = nn.MultiheadAttention(embed_dim, nhead, dropout=dropout)
|
||||
# Implementation of Feedforward model - MLP
|
||||
self.linear1 = nn.Linear(embed_dim, dim_mlp)
|
||||
self.dropout = nn.Dropout(dropout)
|
||||
self.linear2 = nn.Linear(dim_mlp, embed_dim)
|
||||
|
||||
self.norm1 = nn.LayerNorm(embed_dim)
|
||||
self.norm2 = nn.LayerNorm(embed_dim)
|
||||
self.dropout1 = nn.Dropout(dropout)
|
||||
self.dropout2 = nn.Dropout(dropout)
|
||||
|
||||
self.activation = _get_activation_fn(activation)
|
||||
|
||||
def with_pos_embed(self, tensor, pos: Optional[Tensor]):
|
||||
return tensor if pos is None else tensor + pos
|
||||
|
||||
def forward(
|
||||
self,
|
||||
tgt,
|
||||
tgt_mask: Optional[Tensor] = None,
|
||||
tgt_key_padding_mask: Optional[Tensor] = None,
|
||||
query_pos: Optional[Tensor] = None,
|
||||
):
|
||||
# self attention
|
||||
tgt2 = self.norm1(tgt)
|
||||
q = k = self.with_pos_embed(tgt2, query_pos)
|
||||
tgt2 = self.self_attn(
|
||||
q, k, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask
|
||||
)[0]
|
||||
tgt = tgt + self.dropout1(tgt2)
|
||||
|
||||
# ffn
|
||||
tgt2 = self.norm2(tgt)
|
||||
tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
|
||||
tgt = tgt + self.dropout2(tgt2)
|
||||
return tgt
|
||||
|
||||
|
||||
def normalize(in_channels):
|
||||
return torch.nn.GroupNorm(
|
||||
num_groups=32, num_channels=in_channels, eps=1e-6, affine=True
|
||||
)
|
||||
|
||||
|
||||
@torch.jit.script # type: ignore
|
||||
def swish(x):
|
||||
return x * torch.sigmoid(x)
|
||||
|
||||
|
||||
class ResBlock(nn.Module):
|
||||
def __init__(self, in_channels, out_channels=None):
|
||||
super(ResBlock, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = in_channels if out_channels is None else out_channels
|
||||
self.norm1 = normalize(in_channels)
|
||||
self.conv1 = nn.Conv2d(
|
||||
in_channels, out_channels, kernel_size=3, stride=1, padding=1 # type: ignore
|
||||
)
|
||||
self.norm2 = normalize(out_channels)
|
||||
self.conv2 = nn.Conv2d(
|
||||
out_channels, out_channels, kernel_size=3, stride=1, padding=1 # type: ignore
|
||||
)
|
||||
if self.in_channels != self.out_channels:
|
||||
self.conv_out = nn.Conv2d(
|
||||
in_channels, out_channels, kernel_size=1, stride=1, padding=0 # type: ignore
|
||||
)
|
||||
|
||||
def forward(self, x_in):
|
||||
x = x_in
|
||||
x = self.norm1(x)
|
||||
x = swish(x)
|
||||
x = self.conv1(x)
|
||||
x = self.norm2(x)
|
||||
x = swish(x)
|
||||
x = self.conv2(x)
|
||||
if self.in_channels != self.out_channels:
|
||||
x_in = self.conv_out(x_in)
|
||||
|
||||
return x + x_in
|
||||
|
||||
|
||||
class Fuse_sft_block(nn.Module):
|
||||
def __init__(self, in_ch, out_ch):
|
||||
super().__init__()
|
||||
self.encode_enc = ResBlock(2 * in_ch, out_ch)
|
||||
|
||||
self.scale = nn.Sequential(
|
||||
nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
|
||||
nn.LeakyReLU(0.2, True),
|
||||
nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
|
||||
)
|
||||
|
||||
self.shift = nn.Sequential(
|
||||
nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
|
||||
nn.LeakyReLU(0.2, True),
|
||||
nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
|
||||
)
|
||||
|
||||
def forward(self, enc_feat, dec_feat, w=1):
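# SFT-style fusion: predict a per-pixel scale and shift from the concatenated
# encoder/decoder features, then add w * (dec_feat * scale + shift) as a
# residual; w=0 leaves dec_feat untouched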
|
||||
enc_feat = self.encode_enc(torch.cat([enc_feat, dec_feat], dim=1))
|
||||
scale = self.scale(enc_feat)
|
||||
shift = self.shift(enc_feat)
|
||||
residual = w * (dec_feat * scale + shift)
|
||||
out = dec_feat + residual
|
||||
return out
|
||||
|
||||
|
||||
class CodeFormer(VQAutoEncoder):
|
||||
def __init__(self, state_dict):
|
||||
dim_embd = 512
|
||||
n_head = 8
|
||||
n_layers = 9
|
||||
codebook_size = 1024
|
||||
latent_size = 256
|
||||
connect_list = ["32", "64", "128", "256"]
|
||||
fix_modules = ["quantize", "generator"]
|
||||
|
||||
# This is just a guess as I only have one model to look at
|
||||
position_emb = state_dict["position_emb"]
|
||||
dim_embd = position_emb.shape[1]
|
||||
latent_size = position_emb.shape[0]
|
||||
|
||||
try:
|
||||
n_layers = len(
|
||||
set([x.split(".")[1] for x in state_dict.keys() if "ft_layers" in x])
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
codebook_size = state_dict["quantize.embedding.weight"].shape[0]
|
||||
|
||||
# This is also just another guess
|
||||
n_head_exp = (
|
||||
state_dict["ft_layers.0.self_attn.in_proj_weight"].shape[0] // dim_embd
|
||||
)
|
||||
n_head = 2**n_head_exp
|
||||
|
||||
in_nc = state_dict["encoder.blocks.0.weight"].shape[1]
|
||||
|
||||
self.model_arch = "CodeFormer"
|
||||
self.sub_type = "Face SR"
|
||||
self.scale = 8
|
||||
self.in_nc = in_nc
|
||||
self.out_nc = in_nc
|
||||
|
||||
self.state = state_dict
|
||||
|
||||
self.supports_fp16 = False
|
||||
self.supports_bf16 = True
|
||||
self.min_size_restriction = 16
|
||||
|
||||
super(CodeFormer, self).__init__(
|
||||
512, 64, [1, 2, 2, 4, 4, 8], "nearest", 2, [16], codebook_size
|
||||
)
|
||||
|
||||
if fix_modules is not None:
|
||||
for module in fix_modules:
|
||||
for param in getattr(self, module).parameters():
|
||||
param.requires_grad = False
|
||||
|
||||
self.connect_list = connect_list
|
||||
self.n_layers = n_layers
|
||||
self.dim_embd = dim_embd
|
||||
self.dim_mlp = dim_embd * 2
|
||||
|
||||
self.position_emb = nn.Parameter(torch.zeros(latent_size, self.dim_embd)) # type: ignore
|
||||
self.feat_emb = nn.Linear(256, self.dim_embd)
|
||||
|
||||
# transformer
|
||||
self.ft_layers = nn.Sequential(
|
||||
*[
|
||||
TransformerSALayer(
|
||||
embed_dim=dim_embd, nhead=n_head, dim_mlp=self.dim_mlp, dropout=0.0
|
||||
)
|
||||
for _ in range(self.n_layers)
|
||||
]
|
||||
)
|
||||
|
||||
# logits_predict head
|
||||
self.idx_pred_layer = nn.Sequential(
|
||||
nn.LayerNorm(dim_embd), nn.Linear(dim_embd, codebook_size, bias=False)
|
||||
)
|
||||
|
||||
self.channels = {
|
||||
"16": 512,
|
||||
"32": 256,
|
||||
"64": 256,
|
||||
"128": 128,
|
||||
"256": 128,
|
||||
"512": 64,
|
||||
}
|
||||
|
||||
# after second residual block for > 16, before attn layer for ==16
|
||||
self.fuse_encoder_block = {
|
||||
"512": 2,
|
||||
"256": 5,
|
||||
"128": 8,
|
||||
"64": 11,
|
||||
"32": 14,
|
||||
"16": 18,
|
||||
}
|
||||
# after first residual block for > 16, before attn layer for ==16
|
||||
self.fuse_generator_block = {
|
||||
"16": 6,
|
||||
"32": 9,
|
||||
"64": 12,
|
||||
"128": 15,
|
||||
"256": 18,
|
||||
"512": 21,
|
||||
}
|
||||
|
||||
# fuse_convs_dict
|
||||
self.fuse_convs_dict = nn.ModuleDict()
|
||||
for f_size in self.connect_list:
|
||||
in_ch = self.channels[f_size]
|
||||
self.fuse_convs_dict[f_size] = Fuse_sft_block(in_ch, in_ch)
|
||||
|
||||
self.load_state_dict(state_dict)
|
||||
|
||||
def _init_weights(self, module):
|
||||
if isinstance(module, (nn.Linear, nn.Embedding)):
|
||||
module.weight.data.normal_(mean=0.0, std=0.02)
|
||||
if isinstance(module, nn.Linear) and module.bias is not None:
|
||||
module.bias.data.zero_()
|
||||
elif isinstance(module, nn.LayerNorm):
|
||||
module.bias.data.zero_()
|
||||
module.weight.data.fill_(1.0)
|
||||
|
||||
def forward(self, x, weight=0.5, **kwargs):
|
||||
detach_16 = True
|
||||
code_only = False
|
||||
adain = True
|
||||
# ################### Encoder #####################
|
||||
enc_feat_dict = {}
|
||||
out_list = [self.fuse_encoder_block[f_size] for f_size in self.connect_list]
|
||||
for i, block in enumerate(self.encoder.blocks):
|
||||
x = block(x)
|
||||
if i in out_list:
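# cache encoder features keyed by their spatial size so the generator
# can fuse the matching resolution later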
|
||||
enc_feat_dict[str(x.shape[-1])] = x.clone()
|
||||
|
||||
lq_feat = x
|
||||
# ################# Transformer ###################
|
||||
# quant_feat, codebook_loss, quant_stats = self.quantize(lq_feat)
|
||||
pos_emb = self.position_emb.unsqueeze(1).repeat(1, x.shape[0], 1)
|
||||
# BCHW -> BC(HW) -> (HW)BC
|
||||
feat_emb = self.feat_emb(lq_feat.flatten(2).permute(2, 0, 1))
|
||||
query_emb = feat_emb
|
||||
# Transformer encoder
|
||||
for layer in self.ft_layers:
|
||||
query_emb = layer(query_emb, query_pos=pos_emb)
|
||||
|
||||
# output logits
|
||||
logits = self.idx_pred_layer(query_emb) # (hw)bn
|
||||
logits = logits.permute(1, 0, 2) # (hw)bn -> b(hw)n
|
||||
|
||||
if code_only: # for training stage II
|
||||
# logits doesn't need softmax before cross_entropy loss
|
||||
return logits, lq_feat
|
||||
|
||||
# ################# Quantization ###################
|
||||
# if self.training:
|
||||
# quant_feat = torch.einsum('btn,nc->btc', [soft_one_hot, self.quantize.embedding.weight])
|
||||
# # b(hw)c -> bc(hw) -> bchw
|
||||
# quant_feat = quant_feat.permute(0,2,1).view(lq_feat.shape)
|
||||
# ------------
|
||||
soft_one_hot = F.softmax(logits, dim=2)
|
||||
_, top_idx = torch.topk(soft_one_hot, 1, dim=2)
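# take the most likely codebook entry at each spatial position and look up
# its embedding, reshaped back to a (batch, 16, 16, 256) feature map below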
|
||||
quant_feat = self.quantize.get_codebook_feat(
|
||||
top_idx, shape=[x.shape[0], 16, 16, 256] # type: ignore
|
||||
)
|
||||
# preserve gradients
|
||||
# quant_feat = lq_feat + (quant_feat - lq_feat).detach()
|
||||
|
||||
if detach_16:
|
||||
quant_feat = quant_feat.detach() # for training stage III
|
||||
if adain:
|
||||
quant_feat = adaptive_instance_normalization(quant_feat, lq_feat)
|
||||
|
||||
# ################## Generator ####################
|
||||
x = quant_feat
|
||||
fuse_list = [self.fuse_generator_block[f_size] for f_size in self.connect_list]
|
||||
|
||||
for i, block in enumerate(self.generator.blocks):
|
||||
x = block(x)
|
||||
if i in fuse_list: # fuse after i-th block
|
||||
f_size = str(x.shape[-1])
|
||||
if weight > 0:
|
||||
x = self.fuse_convs_dict[f_size](
|
||||
enc_feat_dict[f_size].detach(), x, weight
|
||||
)
|
||||
out = x
|
||||
# logits doesn't need softmax before cross_entropy loss
|
||||
# return out, logits, lq_feat
|
||||
return out, logits
|
81
comfy_extras/chainner_models/architecture/face/fused_act.py
Normal file
@@ -0,0 +1,81 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
from torch.autograd import Function
|
||||
|
||||
fused_act_ext = None
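# fused_act_ext would normally be the compiled CUDA extension from
# stylegan2-pytorch; it is left as None here, so the autograd Functions below
# cannot actually run unless such an extension is provided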
|
||||
|
||||
|
||||
class FusedLeakyReLUFunctionBackward(Function):
|
||||
@staticmethod
|
||||
def forward(ctx, grad_output, out, negative_slope, scale):
|
||||
ctx.save_for_backward(out)
|
||||
ctx.negative_slope = negative_slope
|
||||
ctx.scale = scale
|
||||
|
||||
empty = grad_output.new_empty(0)
|
||||
|
||||
grad_input = fused_act_ext.fused_bias_act(
|
||||
grad_output, empty, out, 3, 1, negative_slope, scale
|
||||
)
|
||||
|
||||
dim = [0]
|
||||
|
||||
if grad_input.ndim > 2:
|
||||
dim += list(range(2, grad_input.ndim))
|
||||
|
||||
grad_bias = grad_input.sum(dim).detach()
|
||||
|
||||
return grad_input, grad_bias
|
||||
|
||||
@staticmethod
|
||||
def backward(ctx, gradgrad_input, gradgrad_bias):
|
||||
(out,) = ctx.saved_tensors
|
||||
gradgrad_out = fused_act_ext.fused_bias_act(
|
||||
gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, ctx.scale
|
||||
)
|
||||
|
||||
return gradgrad_out, None, None, None
|
||||
|
||||
|
||||
class FusedLeakyReLUFunction(Function):
|
||||
@staticmethod
|
||||
def forward(ctx, input, bias, negative_slope, scale):
|
||||
empty = input.new_empty(0)
|
||||
out = fused_act_ext.fused_bias_act(
|
||||
input, bias, empty, 3, 0, negative_slope, scale
|
||||
)
|
||||
ctx.save_for_backward(out)
|
||||
ctx.negative_slope = negative_slope
|
||||
ctx.scale = scale
|
||||
|
||||
return out
|
||||
|
||||
@staticmethod
|
||||
def backward(ctx, grad_output):
|
||||
(out,) = ctx.saved_tensors
|
||||
|
||||
grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(
|
||||
grad_output, out, ctx.negative_slope, ctx.scale
|
||||
)
|
||||
|
||||
return grad_input, grad_bias, None, None
|
||||
|
||||
|
||||
class FusedLeakyReLU(nn.Module):
|
||||
def __init__(self, channel, negative_slope=0.2, scale=2**0.5):
|
||||
super().__init__()
|
||||
|
||||
self.bias = nn.Parameter(torch.zeros(channel))
|
||||
self.negative_slope = negative_slope
|
||||
self.scale = scale
|
||||
|
||||
def forward(self, input):
|
||||
return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale)
|
||||
|
||||
|
||||
def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2**0.5):
|
||||
return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale)
|
@@ -0,0 +1,389 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
import math
|
||||
import random
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
|
||||
from .gfpganv1_arch import ResUpBlock
|
||||
from .stylegan2_bilinear_arch import (
|
||||
ConvLayer,
|
||||
EqualConv2d,
|
||||
EqualLinear,
|
||||
ResBlock,
|
||||
ScaledLeakyReLU,
|
||||
StyleGAN2GeneratorBilinear,
|
||||
)
|
||||
|
||||
|
||||
class StyleGAN2GeneratorBilinearSFT(StyleGAN2GeneratorBilinear):
|
||||
"""StyleGAN2 Generator with SFT modulation (Spatial Feature Transform).
|
||||
It is the bilinear version. It does not use the complicated UpFirDnSmooth function, which is not friendly for
|
||||
deployment. It can be easily converted to the clean version: StyleGAN2GeneratorCSFT.
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
|
||||
lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.
|
||||
narrow (float): The narrow ratio for channels. Default: 1.
|
||||
sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
out_size,
|
||||
num_style_feat=512,
|
||||
num_mlp=8,
|
||||
channel_multiplier=2,
|
||||
lr_mlp=0.01,
|
||||
narrow=1,
|
||||
sft_half=False,
|
||||
):
|
||||
super(StyleGAN2GeneratorBilinearSFT, self).__init__(
|
||||
out_size,
|
||||
num_style_feat=num_style_feat,
|
||||
num_mlp=num_mlp,
|
||||
channel_multiplier=channel_multiplier,
|
||||
lr_mlp=lr_mlp,
|
||||
narrow=narrow,
|
||||
)
|
||||
self.sft_half = sft_half
|
||||
|
||||
def forward(
|
||||
self,
|
||||
styles,
|
||||
conditions,
|
||||
input_is_latent=False,
|
||||
noise=None,
|
||||
randomize_noise=True,
|
||||
truncation=1,
|
||||
truncation_latent=None,
|
||||
inject_index=None,
|
||||
return_latents=False,
|
||||
):
|
||||
"""Forward function for StyleGAN2GeneratorBilinearSFT.
|
||||
Args:
|
||||
styles (list[Tensor]): Sample codes of styles.
|
||||
conditions (list[Tensor]): SFT conditions to generators.
|
||||
input_is_latent (bool): Whether input is latent style. Default: False.
|
||||
noise (Tensor | None): Input noise or None. Default: None.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.
|
||||
truncation (float): The truncation ratio. Default: 1.
|
||||
truncation_latent (Tensor | None): The truncation latent tensor. Default: None.
|
||||
inject_index (int | None): The injection index for mixing noise. Default: None.
|
||||
return_latents (bool): Whether to return style latents. Default: False.
|
||||
"""
|
||||
# style codes -> latents with Style MLP layer
|
||||
if not input_is_latent:
|
||||
styles = [self.style_mlp(s) for s in styles]
|
||||
# noises
|
||||
if noise is None:
|
||||
if randomize_noise:
|
||||
noise = [None] * self.num_layers # for each style conv layer
|
||||
else: # use the stored noise
|
||||
noise = [
|
||||
getattr(self.noises, f"noise{i}") for i in range(self.num_layers)
|
||||
]
|
||||
# style truncation
|
||||
if truncation < 1:
|
||||
style_truncation = []
|
||||
for style in styles:
|
||||
style_truncation.append(
|
||||
truncation_latent + truncation * (style - truncation_latent)
|
||||
)
|
||||
styles = style_truncation
|
||||
# get style latents with injection
|
||||
if len(styles) == 1:
|
||||
inject_index = self.num_latent
|
||||
|
||||
if styles[0].ndim < 3:
|
||||
# repeat latent code for all the layers
|
||||
latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
else: # used for encoder with different latent code for each layer
|
||||
latent = styles[0]
|
||||
elif len(styles) == 2: # mixing noises
|
||||
if inject_index is None:
|
||||
inject_index = random.randint(1, self.num_latent - 1)
|
||||
latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
latent2 = (
|
||||
styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)
|
||||
)
|
||||
latent = torch.cat([latent1, latent2], 1)
|
||||
|
||||
# main generation
|
||||
out = self.constant_input(latent.shape[0])
|
||||
out = self.style_conv1(out, latent[:, 0], noise=noise[0])
|
||||
skip = self.to_rgb1(out, latent[:, 1])
|
||||
|
||||
i = 1
|
||||
for conv1, conv2, noise1, noise2, to_rgb in zip(
|
||||
self.style_convs[::2],
|
||||
self.style_convs[1::2],
|
||||
noise[1::2],
|
||||
noise[2::2],
|
||||
self.to_rgbs,
|
||||
):
|
||||
out = conv1(out, latent[:, i], noise=noise1)
|
||||
|
||||
# the conditions may have fewer levels
|
||||
if i < len(conditions):
|
||||
# SFT part to combine the conditions
|
||||
if self.sft_half: # only apply SFT to half of the channels
|
||||
out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1)
|
||||
out_sft = out_sft * conditions[i - 1] + conditions[i]
|
||||
out = torch.cat([out_same, out_sft], dim=1)
|
||||
else: # apply SFT to all the channels
|
||||
out = out * conditions[i - 1] + conditions[i]
|
||||
|
||||
out = conv2(out, latent[:, i + 1], noise=noise2)
|
||||
skip = to_rgb(out, latent[:, i + 2], skip) # feature back to the rgb space
|
||||
i += 2
|
||||
|
||||
image = skip
|
||||
|
||||
if return_latents:
|
||||
return image, latent
|
||||
else:
|
||||
return image, None
|
||||
|
||||
|
||||
class GFPGANBilinear(nn.Module):
|
||||
"""The GFPGAN architecture: Unet + StyleGAN2 decoder with SFT.
|
||||
It is the bilinear version and it does not use the complicated UpFirDnSmooth function, which is not friendly for
|
||||
deployment. It can be easily converted to the clean version: GFPGANv1Clean.
|
||||
Ref: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior.
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
|
||||
decoder_load_path (str): The path to the pre-trained decoder model (usually, the StyleGAN2). Default: None.
|
||||
fix_decoder (bool): Whether to fix the decoder. Default: True.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.
|
||||
input_is_latent (bool): Whether input is latent style. Default: False.
|
||||
different_w (bool): Whether to use different latent w for different layers. Default: False.
|
||||
narrow (float): The narrow ratio for channels. Default: 1.
|
||||
sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
out_size,
|
||||
num_style_feat=512,
|
||||
channel_multiplier=1,
|
||||
decoder_load_path=None,
|
||||
fix_decoder=True,
|
||||
# for stylegan decoder
|
||||
num_mlp=8,
|
||||
lr_mlp=0.01,
|
||||
input_is_latent=False,
|
||||
different_w=False,
|
||||
narrow=1,
|
||||
sft_half=False,
|
||||
):
|
||||
super(GFPGANBilinear, self).__init__()
|
||||
self.input_is_latent = input_is_latent
|
||||
self.different_w = different_w
|
||||
self.num_style_feat = num_style_feat
|
||||
self.min_size_restriction = 512
|
||||
|
||||
unet_narrow = narrow * 0.5 # by default, use a half of input channels
|
||||
channels = {
|
||||
"4": int(512 * unet_narrow),
|
||||
"8": int(512 * unet_narrow),
|
||||
"16": int(512 * unet_narrow),
|
||||
"32": int(512 * unet_narrow),
|
||||
"64": int(256 * channel_multiplier * unet_narrow),
|
||||
"128": int(128 * channel_multiplier * unet_narrow),
|
||||
"256": int(64 * channel_multiplier * unet_narrow),
|
||||
"512": int(32 * channel_multiplier * unet_narrow),
|
||||
"1024": int(16 * channel_multiplier * unet_narrow),
|
||||
}
|
||||
|
||||
self.log_size = int(math.log(out_size, 2))
|
||||
first_out_size = 2 ** (int(math.log(out_size, 2)))
|
||||
|
||||
self.conv_body_first = ConvLayer(
|
||||
3, channels[f"{first_out_size}"], 1, bias=True, activate=True
|
||||
)
|
||||
|
||||
# downsample
|
||||
in_channels = channels[f"{first_out_size}"]
|
||||
self.conv_body_down = nn.ModuleList()
|
||||
for i in range(self.log_size, 2, -1):
|
||||
out_channels = channels[f"{2**(i - 1)}"]
|
||||
self.conv_body_down.append(ResBlock(in_channels, out_channels))
|
||||
in_channels = out_channels
|
||||
|
||||
self.final_conv = ConvLayer(
|
||||
in_channels, channels["4"], 3, bias=True, activate=True
|
||||
)
|
||||
|
||||
# upsample
|
||||
in_channels = channels["4"]
|
||||
self.conv_body_up = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
self.conv_body_up.append(ResUpBlock(in_channels, out_channels))
|
||||
in_channels = out_channels
|
||||
|
||||
# to RGB
|
||||
self.toRGB = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
self.toRGB.append(
|
||||
EqualConv2d(
|
||||
channels[f"{2**i}"],
|
||||
3,
|
||||
1,
|
||||
stride=1,
|
||||
padding=0,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
)
|
||||
)
|
||||
|
||||
if different_w:
|
||||
linear_out_channel = (int(math.log(out_size, 2)) * 2 - 2) * num_style_feat
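# with different_w, the linear head predicts a separate w vector for each of
# the (2 * log2(out_size) - 2) style layers of the StyleGAN2 decoder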
|
||||
else:
|
||||
linear_out_channel = num_style_feat
|
||||
|
||||
self.final_linear = EqualLinear(
|
||||
channels["4"] * 4 * 4,
|
||||
linear_out_channel,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
lr_mul=1,
|
||||
activation=None,
|
||||
)
|
||||
|
||||
# the decoder: stylegan2 generator with SFT modulations
|
||||
self.stylegan_decoder = StyleGAN2GeneratorBilinearSFT(
|
||||
out_size=out_size,
|
||||
num_style_feat=num_style_feat,
|
||||
num_mlp=num_mlp,
|
||||
channel_multiplier=channel_multiplier,
|
||||
lr_mlp=lr_mlp,
|
||||
narrow=narrow,
|
||||
sft_half=sft_half,
|
||||
)
|
||||
|
||||
# load pre-trained stylegan2 model if necessary
|
||||
if decoder_load_path:
|
||||
self.stylegan_decoder.load_state_dict(
|
||||
torch.load(
|
||||
decoder_load_path, map_location=lambda storage, loc: storage
|
||||
)["params_ema"]
|
||||
)
|
||||
# fix decoder without updating params
|
||||
if fix_decoder:
|
||||
for _, param in self.stylegan_decoder.named_parameters():
|
||||
param.requires_grad = False
|
||||
|
||||
# for SFT modulations (scale and shift)
|
||||
self.condition_scale = nn.ModuleList()
|
||||
self.condition_shift = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
if sft_half:
|
||||
sft_out_channels = out_channels
|
||||
else:
|
||||
sft_out_channels = out_channels * 2
|
||||
self.condition_scale.append(
|
||||
nn.Sequential(
|
||||
EqualConv2d(
|
||||
out_channels,
|
||||
out_channels,
|
||||
3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
),
|
||||
ScaledLeakyReLU(0.2),
|
||||
EqualConv2d(
|
||||
out_channels,
|
||||
sft_out_channels,
|
||||
3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
bias=True,
|
||||
bias_init_val=1,
|
||||
),
|
||||
)
|
||||
)
|
||||
self.condition_shift.append(
|
||||
nn.Sequential(
|
||||
EqualConv2d(
|
||||
out_channels,
|
||||
out_channels,
|
||||
3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
),
|
||||
ScaledLeakyReLU(0.2),
|
||||
EqualConv2d(
|
||||
out_channels,
|
||||
sft_out_channels,
|
||||
3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
),
|
||||
)
|
||||
)
|
||||
|
||||
def forward(self, x, return_latents=False, return_rgb=True, randomize_noise=True):
|
||||
"""Forward function for GFPGANBilinear.
|
||||
Args:
|
||||
x (Tensor): Input images.
|
||||
return_latents (bool): Whether to return style latents. Default: False.
|
||||
return_rgb (bool): Whether return intermediate rgb images. Default: True.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.
|
||||
"""
|
||||
conditions = []
|
||||
unet_skips = []
|
||||
out_rgbs = []
|
||||
|
||||
# encoder
|
||||
feat = self.conv_body_first(x)
|
||||
for i in range(self.log_size - 2):
|
||||
feat = self.conv_body_down[i](feat)
|
||||
unet_skips.insert(0, feat)
|
||||
|
||||
feat = self.final_conv(feat)
|
||||
|
||||
# style code
|
||||
style_code = self.final_linear(feat.view(feat.size(0), -1))
|
||||
if self.different_w:
|
||||
style_code = style_code.view(style_code.size(0), -1, self.num_style_feat)
|
||||
|
||||
# decode
|
||||
for i in range(self.log_size - 2):
|
||||
# add unet skip
|
||||
feat = feat + unet_skips[i]
|
||||
# ResUpLayer
|
||||
feat = self.conv_body_up[i](feat)
|
||||
# generate scale and shift for SFT layers
|
||||
scale = self.condition_scale[i](feat)
|
||||
conditions.append(scale.clone())
|
||||
shift = self.condition_shift[i](feat)
|
||||
conditions.append(shift.clone())
|
||||
# generate rgb images
|
||||
if return_rgb:
|
||||
out_rgbs.append(self.toRGB[i](feat))
|
||||
|
||||
# decoder
|
||||
image, _ = self.stylegan_decoder(
|
||||
[style_code],
|
||||
conditions,
|
||||
return_latents=return_latents,
|
||||
input_is_latent=self.input_is_latent,
|
||||
randomize_noise=randomize_noise,
|
||||
)
|
||||
|
||||
return image, out_rgbs
|
566
comfy_extras/chainner_models/architecture/face/gfpganv1_arch.py
Normal file
@@ -0,0 +1,566 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
import math
|
||||
import random
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
from torch.nn import functional as F
|
||||
|
||||
from .fused_act import FusedLeakyReLU
|
||||
from .stylegan2_arch import (
|
||||
ConvLayer,
|
||||
EqualConv2d,
|
||||
EqualLinear,
|
||||
ResBlock,
|
||||
ScaledLeakyReLU,
|
||||
StyleGAN2Generator,
|
||||
)
|
||||
|
||||
|
||||
class StyleGAN2GeneratorSFT(StyleGAN2Generator):
|
||||
"""StyleGAN2 Generator with SFT modulation (Spatial Feature Transform).
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel magnitude. A cross product will be
|
||||
applied to extend the 1D resample kernel to a 2D resample kernel. Default: (1, 3, 3, 1).
|
||||
lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.
|
||||
narrow (float): The narrow ratio for channels. Default: 1.
|
||||
sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
out_size,
|
||||
num_style_feat=512,
|
||||
num_mlp=8,
|
||||
channel_multiplier=2,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
lr_mlp=0.01,
|
||||
narrow=1,
|
||||
sft_half=False,
|
||||
):
|
||||
super(StyleGAN2GeneratorSFT, self).__init__(
|
||||
out_size,
|
||||
num_style_feat=num_style_feat,
|
||||
num_mlp=num_mlp,
|
||||
channel_multiplier=channel_multiplier,
|
||||
resample_kernel=resample_kernel,
|
||||
lr_mlp=lr_mlp,
|
||||
narrow=narrow,
|
||||
)
|
||||
self.sft_half = sft_half
|
||||
|
||||
def forward(
|
||||
self,
|
||||
styles,
|
||||
conditions,
|
||||
input_is_latent=False,
|
||||
noise=None,
|
||||
randomize_noise=True,
|
||||
truncation=1,
|
||||
truncation_latent=None,
|
||||
inject_index=None,
|
||||
return_latents=False,
|
||||
):
|
||||
"""Forward function for StyleGAN2GeneratorSFT.
|
||||
Args:
|
||||
styles (list[Tensor]): Sample codes of styles.
|
||||
conditions (list[Tensor]): SFT conditions to generators.
|
||||
input_is_latent (bool): Whether input is latent style. Default: False.
|
||||
noise (Tensor | None): Input noise or None. Default: None.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.
|
||||
truncation (float): The truncation ratio. Default: 1.
|
||||
truncation_latent (Tensor | None): The truncation latent tensor. Default: None.
|
||||
inject_index (int | None): The injection index for mixing noise. Default: None.
|
||||
return_latents (bool): Whether to return style latents. Default: False.
|
||||
"""
|
||||
# style codes -> latents with Style MLP layer
|
||||
if not input_is_latent:
|
||||
styles = [self.style_mlp(s) for s in styles]
|
||||
# noises
|
||||
if noise is None:
|
||||
if randomize_noise:
|
||||
noise = [None] * self.num_layers # for each style conv layer
|
||||
else: # use the stored noise
|
||||
noise = [
|
||||
getattr(self.noises, f"noise{i}") for i in range(self.num_layers)
|
||||
]
|
||||
# style truncation
|
||||
if truncation < 1:
|
||||
style_truncation = []
|
||||
for style in styles:
|
||||
style_truncation.append(
|
||||
truncation_latent + truncation * (style - truncation_latent)
|
||||
)
|
||||
styles = style_truncation
|
||||
# get style latents with injection
|
||||
if len(styles) == 1:
|
||||
inject_index = self.num_latent
|
||||
|
||||
if styles[0].ndim < 3:
|
||||
# repeat latent code for all the layers
|
||||
latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
else: # used for encoder with different latent code for each layer
|
||||
latent = styles[0]
|
||||
elif len(styles) == 2: # mixing noises
|
||||
if inject_index is None:
|
||||
inject_index = random.randint(1, self.num_latent - 1)
|
||||
latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
latent2 = (
|
||||
styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)
|
||||
)
|
||||
latent = torch.cat([latent1, latent2], 1)
|
||||
|
||||
# main generation
|
||||
out = self.constant_input(latent.shape[0])
|
||||
out = self.style_conv1(out, latent[:, 0], noise=noise[0])
|
||||
skip = self.to_rgb1(out, latent[:, 1])
|
||||
|
||||
i = 1
|
||||
for conv1, conv2, noise1, noise2, to_rgb in zip(
|
||||
self.style_convs[::2],
|
||||
self.style_convs[1::2],
|
||||
noise[1::2],
|
||||
noise[2::2],
|
||||
self.to_rgbs,
|
||||
):
|
||||
out = conv1(out, latent[:, i], noise=noise1)
|
||||
|
||||
# the conditions may have fewer levels
|
||||
if i < len(conditions):
|
||||
# SFT part to combine the conditions
|
||||
if self.sft_half: # only apply SFT to half of the channels
|
||||
out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1)
|
||||
out_sft = out_sft * conditions[i - 1] + conditions[i]
|
||||
out = torch.cat([out_same, out_sft], dim=1)
|
||||
else: # apply SFT to all the channels
|
||||
out = out * conditions[i - 1] + conditions[i]
|
||||
|
||||
out = conv2(out, latent[:, i + 1], noise=noise2)
|
||||
skip = to_rgb(out, latent[:, i + 2], skip) # feature back to the rgb space
|
||||
i += 2
|
||||
|
||||
image = skip
|
||||
|
||||
if return_latents:
|
||||
return image, latent
|
||||
else:
|
||||
return image, None
|
||||
|
||||
|
||||
class ConvUpLayer(nn.Module):
|
||||
"""Convolutional upsampling layer. It uses bilinear upsampler + Conv.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
stride (int): Stride of the convolution. Default: 1
|
||||
padding (int): Zero-padding added to both sides of the input. Default: 0.
|
||||
bias (bool): If ``True``, adds a learnable bias to the output. Default: ``True``.
|
||||
bias_init_val (float): Bias initialized value. Default: 0.
|
||||
activate (bool): Whether to use activation. Default: True.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
stride=1,
|
||||
padding=0,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
activate=True,
|
||||
):
|
||||
super(ConvUpLayer, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = out_channels
|
||||
self.kernel_size = kernel_size
|
||||
self.stride = stride
|
||||
self.padding = padding
|
||||
# self.scale is used to scale the convolution weights, which is related to the common initializations.
|
||||
self.scale = 1 / math.sqrt(in_channels * kernel_size**2)
|
||||
|
||||
self.weight = nn.Parameter(
|
||||
torch.randn(out_channels, in_channels, kernel_size, kernel_size)
|
||||
)
|
||||
|
||||
if bias and not activate:
|
||||
self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val))
|
||||
else:
|
||||
self.register_parameter("bias", None)
|
||||
|
||||
# activation
|
||||
if activate:
|
||||
if bias:
|
||||
self.activation = FusedLeakyReLU(out_channels)
|
||||
else:
|
||||
self.activation = ScaledLeakyReLU(0.2)
|
||||
else:
|
||||
self.activation = None
|
||||
|
||||
def forward(self, x):
|
||||
# bilinear upsample
|
||||
out = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
|
||||
# conv
|
||||
out = F.conv2d(
|
||||
out,
|
||||
self.weight * self.scale,
|
||||
bias=self.bias,
|
||||
stride=self.stride,
|
||||
padding=self.padding,
|
||||
)
|
||||
# activation
|
||||
if self.activation is not None:
|
||||
out = self.activation(out)
|
||||
return out
|
||||
|
||||
|
||||
class ResUpBlock(nn.Module):
|
||||
"""Residual block with upsampling.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
"""
|
||||
|
||||
def __init__(self, in_channels, out_channels):
|
||||
super(ResUpBlock, self).__init__()
|
||||
|
||||
self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True)
|
||||
self.conv2 = ConvUpLayer(
|
||||
in_channels, out_channels, 3, stride=1, padding=1, bias=True, activate=True
|
||||
)
|
||||
self.skip = ConvUpLayer(
|
||||
in_channels, out_channels, 1, bias=False, activate=False
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
out = self.conv1(x)
|
||||
out = self.conv2(out)
|
||||
skip = self.skip(x)
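# scale the residual sum by 1/sqrt(2) to keep its variance comparable to the branch inputs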
|
||||
out = (out + skip) / math.sqrt(2)
|
||||
return out
|
||||
|
||||
|
||||
class GFPGANv1(nn.Module):
|
||||
"""The GFPGAN architecture: Unet + StyleGAN2 decoder with SFT.
|
||||
Ref: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior.
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel magnitude. A cross product will be
|
||||
applied to extend the 1D resample kernel to a 2D resample kernel. Default: (1, 3, 3, 1).
|
||||
decoder_load_path (str): The path to the pre-trained decoder model (usually, the StyleGAN2). Default: None.
|
||||
fix_decoder (bool): Whether to fix the decoder. Default: True.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.
|
||||
input_is_latent (bool): Whether input is latent style. Default: False.
|
||||
different_w (bool): Whether to use different latent w for different layers. Default: False.
|
||||
narrow (float): The narrow ratio for channels. Default: 1.
|
||||
sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
out_size,
|
||||
num_style_feat=512,
|
||||
channel_multiplier=1,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
decoder_load_path=None,
|
||||
fix_decoder=True,
|
||||
# for stylegan decoder
|
||||
num_mlp=8,
|
||||
lr_mlp=0.01,
|
||||
input_is_latent=False,
|
||||
different_w=False,
|
||||
narrow=1,
|
||||
sft_half=False,
|
||||
):
|
||||
super(GFPGANv1, self).__init__()
|
||||
self.input_is_latent = input_is_latent
|
||||
self.different_w = different_w
|
||||
self.num_style_feat = num_style_feat
|
||||
|
||||
unet_narrow = narrow * 0.5  # by default, use half of the input channels
|
||||
channels = {
|
||||
"4": int(512 * unet_narrow),
|
||||
"8": int(512 * unet_narrow),
|
||||
"16": int(512 * unet_narrow),
|
||||
"32": int(512 * unet_narrow),
|
||||
"64": int(256 * channel_multiplier * unet_narrow),
|
||||
"128": int(128 * channel_multiplier * unet_narrow),
|
||||
"256": int(64 * channel_multiplier * unet_narrow),
|
||||
"512": int(32 * channel_multiplier * unet_narrow),
|
||||
"1024": int(16 * channel_multiplier * unet_narrow),
|
||||
}
|
||||
|
||||
self.log_size = int(math.log(out_size, 2))
|
||||
first_out_size = 2 ** (int(math.log(out_size, 2)))
|
||||
|
||||
self.conv_body_first = ConvLayer(
|
||||
3, channels[f"{first_out_size}"], 1, bias=True, activate=True
|
||||
)
|
||||
|
||||
# downsample
|
||||
in_channels = channels[f"{first_out_size}"]
|
||||
self.conv_body_down = nn.ModuleList()
|
||||
for i in range(self.log_size, 2, -1):
|
||||
out_channels = channels[f"{2**(i - 1)}"]
|
||||
self.conv_body_down.append(
|
||||
ResBlock(in_channels, out_channels, resample_kernel)
|
||||
)
|
||||
in_channels = out_channels
|
||||
|
||||
self.final_conv = ConvLayer(
|
||||
in_channels, channels["4"], 3, bias=True, activate=True
|
||||
)
|
||||
|
||||
# upsample
|
||||
in_channels = channels["4"]
|
||||
self.conv_body_up = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
self.conv_body_up.append(ResUpBlock(in_channels, out_channels))
|
||||
in_channels = out_channels
|
||||
|
||||
# to RGB
|
||||
self.toRGB = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
self.toRGB.append(
|
||||
EqualConv2d(
|
||||
channels[f"{2**i}"],
|
||||
3,
|
||||
1,
|
||||
stride=1,
|
||||
padding=0,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
)
|
||||
)
|
||||
|
||||
if different_w:
|
||||
linear_out_channel = (int(math.log(out_size, 2)) * 2 - 2) * num_style_feat
|
||||
else:
|
||||
linear_out_channel = num_style_feat
|
||||
|
||||
self.final_linear = EqualLinear(
|
||||
channels["4"] * 4 * 4,
|
||||
linear_out_channel,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
lr_mul=1,
|
||||
activation=None,
|
||||
)
|
||||
|
||||
# the decoder: stylegan2 generator with SFT modulations
|
||||
self.stylegan_decoder = StyleGAN2GeneratorSFT(
|
||||
out_size=out_size,
|
||||
num_style_feat=num_style_feat,
|
||||
num_mlp=num_mlp,
|
||||
channel_multiplier=channel_multiplier,
|
||||
resample_kernel=resample_kernel,
|
||||
lr_mlp=lr_mlp,
|
||||
narrow=narrow,
|
||||
sft_half=sft_half,
|
||||
)
|
||||
|
||||
# load pre-trained stylegan2 model if necessary
|
||||
if decoder_load_path:
|
||||
self.stylegan_decoder.load_state_dict(
|
||||
torch.load(
|
||||
decoder_load_path, map_location=lambda storage, loc: storage
|
||||
)["params_ema"]
|
||||
)
|
||||
# fix decoder without updating params
|
||||
if fix_decoder:
|
||||
for _, param in self.stylegan_decoder.named_parameters():
|
||||
param.requires_grad = False
|
||||
|
||||
# for SFT modulations (scale and shift)
|
||||
self.condition_scale = nn.ModuleList()
|
||||
self.condition_shift = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
if sft_half:
|
||||
sft_out_channels = out_channels
|
||||
else:
|
||||
sft_out_channels = out_channels * 2
|
||||
self.condition_scale.append(
|
||||
nn.Sequential(
|
||||
EqualConv2d(
|
||||
out_channels,
|
||||
out_channels,
|
||||
3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
),
|
||||
ScaledLeakyReLU(0.2),
|
||||
EqualConv2d(
|
||||
out_channels,
|
||||
sft_out_channels,
|
||||
3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
bias=True,
|
||||
bias_init_val=1,
|
||||
),
|
||||
)
|
||||
)
|
||||
self.condition_shift.append(
|
||||
nn.Sequential(
|
||||
EqualConv2d(
|
||||
out_channels,
|
||||
out_channels,
|
||||
3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
),
|
||||
ScaledLeakyReLU(0.2),
|
||||
EqualConv2d(
|
||||
out_channels,
|
||||
sft_out_channels,
|
||||
3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
),
|
||||
)
|
||||
)
|
||||
|
||||
def forward(
|
||||
self, x, return_latents=False, return_rgb=True, randomize_noise=True, **kwargs
|
||||
):
|
||||
"""Forward function for GFPGANv1.
|
||||
Args:
|
||||
x (Tensor): Input images.
|
||||
return_latents (bool): Whether to return style latents. Default: False.
|
||||
return_rgb (bool): Whether to return intermediate rgb images. Default: True.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.
|
||||
"""
|
||||
conditions = []
|
||||
unet_skips = []
|
||||
out_rgbs = []
|
||||
|
||||
# encoder
|
||||
feat = self.conv_body_first(x)
|
||||
for i in range(self.log_size - 2):
|
||||
feat = self.conv_body_down[i](feat)
|
||||
unet_skips.insert(0, feat)
|
||||
|
||||
feat = self.final_conv(feat)
|
||||
|
||||
# style code
|
||||
style_code = self.final_linear(feat.view(feat.size(0), -1))
|
||||
if self.different_w:
|
||||
style_code = style_code.view(style_code.size(0), -1, self.num_style_feat)
|
||||
|
||||
# decode
|
||||
for i in range(self.log_size - 2):
|
||||
# add unet skip
|
||||
feat = feat + unet_skips[i]
|
||||
# ResUpLayer
|
||||
feat = self.conv_body_up[i](feat)
|
||||
# generate scale and shift for SFT layers
|
||||
scale = self.condition_scale[i](feat)
|
||||
conditions.append(scale.clone())
|
||||
shift = self.condition_shift[i](feat)
|
||||
conditions.append(shift.clone())
|
||||
# generate rgb images
|
||||
if return_rgb:
|
||||
out_rgbs.append(self.toRGB[i](feat))
|
||||
|
||||
# decoder
|
||||
image, _ = self.stylegan_decoder(
|
||||
[style_code],
|
||||
conditions,
|
||||
return_latents=return_latents,
|
||||
input_is_latent=self.input_is_latent,
|
||||
randomize_noise=randomize_noise,
|
||||
)
|
||||
|
||||
return image, out_rgbs
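
# Illustrative usage sketch (assumptions, not requirements: an aligned 512x512
# face crop normalized to [-1, 1]; the argument values below are examples):
#
#     net = GFPGANv1(out_size=512, num_style_feat=512, channel_multiplier=1,
#                    decoder_load_path=None, fix_decoder=False)
#     restored, intermediate_rgbs = net(face_tensor, return_rgb=True)
#
# `restored` is the final image from the StyleGAN2 decoder; `intermediate_rgbs`
# holds one RGB preview per upsampling level of the U-Net body.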
|
||||
|
||||
|
||||
class FacialComponentDiscriminator(nn.Module):
|
||||
"""Facial component (eyes, mouth, noise) discriminator used in GFPGAN."""
|
||||
|
||||
def __init__(self):
|
||||
super(FacialComponentDiscriminator, self).__init__()
|
||||
# It now uses a VGG-style architecture with a fixed model size
|
||||
self.conv1 = ConvLayer(
|
||||
3,
|
||||
64,
|
||||
3,
|
||||
downsample=False,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
bias=True,
|
||||
activate=True,
|
||||
)
|
||||
self.conv2 = ConvLayer(
|
||||
64,
|
||||
128,
|
||||
3,
|
||||
downsample=True,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
bias=True,
|
||||
activate=True,
|
||||
)
|
||||
self.conv3 = ConvLayer(
|
||||
128,
|
||||
128,
|
||||
3,
|
||||
downsample=False,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
bias=True,
|
||||
activate=True,
|
||||
)
|
||||
self.conv4 = ConvLayer(
|
||||
128,
|
||||
256,
|
||||
3,
|
||||
downsample=True,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
bias=True,
|
||||
activate=True,
|
||||
)
|
||||
self.conv5 = ConvLayer(
|
||||
256,
|
||||
256,
|
||||
3,
|
||||
downsample=False,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
bias=True,
|
||||
activate=True,
|
||||
)
|
||||
self.final_conv = ConvLayer(256, 1, 3, bias=True, activate=False)
|
||||
|
||||
def forward(self, x, return_feats=False, **kwargs):
|
||||
"""Forward function for FacialComponentDiscriminator.
|
||||
Args:
|
||||
x (Tensor): Input images.
|
||||
return_feats (bool): Whether to return intermediate features. Default: False.
|
||||
"""
|
||||
feat = self.conv1(x)
|
||||
feat = self.conv3(self.conv2(feat))
|
||||
rlt_feats = []
|
||||
if return_feats:
|
||||
rlt_feats.append(feat.clone())
|
||||
feat = self.conv5(self.conv4(feat))
|
||||
if return_feats:
|
||||
rlt_feats.append(feat.clone())
|
||||
out = self.final_conv(feat)
|
||||
|
||||
if return_feats:
|
||||
return out, rlt_feats
|
||||
else:
|
||||
return out, None
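
# Shape note: conv2 and conv4 are the only downsampling layers, so an input
# ROI of shape (B, 3, H, W) yields a 1-channel realness map of shape
# (B, 1, H/4, W/4). When return_feats is True, rlt_feats holds the 128- and
# 256-channel intermediate features (useful, e.g., for feature-matching style
# losses on the facial-component crops mentioned in the class docstring).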
|
@@ -0,0 +1,370 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
import math
|
||||
import random
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
from torch.nn import functional as F
|
||||
|
||||
from .stylegan2_clean_arch import StyleGAN2GeneratorClean
|
||||
|
||||
|
||||
class StyleGAN2GeneratorCSFT(StyleGAN2GeneratorClean):
|
||||
"""StyleGAN2 Generator with SFT modulation (Spatial Feature Transform).
|
||||
It is the clean version without custom compiled CUDA extensions used in StyleGAN2.
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
|
||||
narrow (float): The narrow ratio for channels. Default: 1.
|
||||
sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
out_size,
|
||||
num_style_feat=512,
|
||||
num_mlp=8,
|
||||
channel_multiplier=2,
|
||||
narrow=1,
|
||||
sft_half=False,
|
||||
):
|
||||
super(StyleGAN2GeneratorCSFT, self).__init__(
|
||||
out_size,
|
||||
num_style_feat=num_style_feat,
|
||||
num_mlp=num_mlp,
|
||||
channel_multiplier=channel_multiplier,
|
||||
narrow=narrow,
|
||||
)
|
||||
self.sft_half = sft_half
|
||||
|
||||
def forward(
|
||||
self,
|
||||
styles,
|
||||
conditions,
|
||||
input_is_latent=False,
|
||||
noise=None,
|
||||
randomize_noise=True,
|
||||
truncation=1,
|
||||
truncation_latent=None,
|
||||
inject_index=None,
|
||||
return_latents=False,
|
||||
):
|
||||
"""Forward function for StyleGAN2GeneratorCSFT.
|
||||
Args:
|
||||
styles (list[Tensor]): Sample codes of styles.
|
||||
conditions (list[Tensor]): SFT conditions to generators.
|
||||
input_is_latent (bool): Whether input is latent style. Default: False.
|
||||
noise (Tensor | None): Input noise or None. Default: None.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.
|
||||
truncation (float): The truncation ratio. Default: 1.
|
||||
truncation_latent (Tensor | None): The truncation latent tensor. Default: None.
|
||||
inject_index (int | None): The injection index for mixing noise. Default: None.
|
||||
return_latents (bool): Whether to return style latents. Default: False.
|
||||
"""
|
||||
# style codes -> latents with Style MLP layer
|
||||
if not input_is_latent:
|
||||
styles = [self.style_mlp(s) for s in styles]
|
||||
# noises
|
||||
if noise is None:
|
||||
if randomize_noise:
|
||||
noise = [None] * self.num_layers # for each style conv layer
|
||||
else: # use the stored noise
|
||||
noise = [
|
||||
getattr(self.noises, f"noise{i}") for i in range(self.num_layers)
|
||||
]
|
||||
# style truncation
|
||||
if truncation < 1:
|
||||
style_truncation = []
|
||||
for style in styles:
|
||||
style_truncation.append(
|
||||
truncation_latent + truncation * (style - truncation_latent)
|
||||
)
|
||||
styles = style_truncation
|
||||
# get style latents with injection
|
||||
if len(styles) == 1:
|
||||
inject_index = self.num_latent
|
||||
|
||||
if styles[0].ndim < 3:
|
||||
# repeat latent code for all the layers
|
||||
latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
else: # used for encoder with different latent code for each layer
|
||||
latent = styles[0]
|
||||
elif len(styles) == 2: # mixing noises
|
||||
if inject_index is None:
|
||||
inject_index = random.randint(1, self.num_latent - 1)
|
||||
latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
latent2 = (
|
||||
styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)
|
||||
)
|
||||
latent = torch.cat([latent1, latent2], 1)
|
||||
|
||||
# main generation
|
||||
out = self.constant_input(latent.shape[0])
|
||||
out = self.style_conv1(out, latent[:, 0], noise=noise[0])
|
||||
skip = self.to_rgb1(out, latent[:, 1])
|
||||
|
||||
i = 1
|
||||
for conv1, conv2, noise1, noise2, to_rgb in zip(
|
||||
self.style_convs[::2],
|
||||
self.style_convs[1::2],
|
||||
noise[1::2],
|
||||
noise[2::2],
|
||||
self.to_rgbs,
|
||||
):
|
||||
out = conv1(out, latent[:, i], noise=noise1)
|
||||
|
||||
# the conditions may have fewer levels
|
||||
if i < len(conditions):
|
||||
# SFT part to combine the conditions
|
||||
if self.sft_half: # only apply SFT to half of the channels
|
||||
out_same, out_sft = torch.split(out, int(out.size(1) // 2), dim=1)
|
||||
out_sft = out_sft * conditions[i - 1] + conditions[i]
|
||||
out = torch.cat([out_same, out_sft], dim=1)
|
||||
else: # apply SFT to all the channels
|
||||
out = out * conditions[i - 1] + conditions[i]
|
||||
|
||||
out = conv2(out, latent[:, i + 1], noise=noise2)
|
||||
skip = to_rgb(out, latent[:, i + 2], skip) # feature back to the rgb space
|
||||
i += 2
|
||||
|
||||
image = skip
|
||||
|
||||
if return_latents:
|
||||
return image, latent
|
||||
else:
|
||||
return image, None
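
# SFT recap: for decoder level i the U-Net supplies a (scale, shift) pair as
# conditions[i - 1] and conditions[i], and the modulation applied above is the
# affine transform feat_sft = feat * scale + shift. With sft_half=True only the
# second half of the channels is modulated; the first half passes through
# unchanged.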
|
||||
|
||||
|
||||
class ResBlock(nn.Module):
|
||||
"""Residual block with bilinear upsampling/downsampling.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
mode (str): Upsampling/downsampling mode. Options: down | up. Default: down.
|
||||
"""
|
||||
|
||||
def __init__(self, in_channels, out_channels, mode="down"):
|
||||
super(ResBlock, self).__init__()
|
||||
|
||||
self.conv1 = nn.Conv2d(in_channels, in_channels, 3, 1, 1)
|
||||
self.conv2 = nn.Conv2d(in_channels, out_channels, 3, 1, 1)
|
||||
self.skip = nn.Conv2d(in_channels, out_channels, 1, bias=False)
|
||||
if mode == "down":
|
||||
self.scale_factor = 0.5
|
||||
elif mode == "up":
|
||||
self.scale_factor = 2
|
||||
|
||||
def forward(self, x):
|
||||
out = F.leaky_relu_(self.conv1(x), negative_slope=0.2)
|
||||
# upsample/downsample
|
||||
out = F.interpolate(
|
||||
out, scale_factor=self.scale_factor, mode="bilinear", align_corners=False
|
||||
)
|
||||
out = F.leaky_relu_(self.conv2(out), negative_slope=0.2)
|
||||
# skip
|
||||
x = F.interpolate(
|
||||
x, scale_factor=self.scale_factor, mode="bilinear", align_corners=False
|
||||
)
|
||||
skip = self.skip(x)
|
||||
out = out + skip
|
||||
return out
|
||||
|
||||
|
||||
class GFPGANv1Clean(nn.Module):
|
||||
"""The GFPGAN architecture: Unet + StyleGAN2 decoder with SFT.
|
||||
It is the clean version without custom compiled CUDA extensions used in StyleGAN2.
|
||||
Ref: GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior.
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
|
||||
decoder_load_path (str): The path to the pre-trained decoder model (usually, the StyleGAN2). Default: None.
|
||||
fix_decoder (bool): Whether to fix the decoder. Default: True.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
input_is_latent (bool): Whether input is latent style. Default: False.
|
||||
different_w (bool): Whether to use different latent w for different layers. Default: False.
|
||||
narrow (float): The narrow ratio for channels. Default: 1.
|
||||
sft_half (bool): Whether to apply SFT on half of the input channels. Default: False.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
state_dict,
|
||||
):
|
||||
super(GFPGANv1Clean, self).__init__()
|
||||
|
||||
out_size = 512
|
||||
num_style_feat = 512
|
||||
channel_multiplier = 2
|
||||
decoder_load_path = None
|
||||
fix_decoder = False
|
||||
num_mlp = 8
|
||||
input_is_latent = True
|
||||
different_w = True
|
||||
narrow = 1
|
||||
sft_half = True
|
||||
|
||||
self.model_arch = "GFPGAN"
|
||||
self.sub_type = "Face SR"
|
||||
self.scale = 8
|
||||
self.in_nc = 3
|
||||
self.out_nc = 3
|
||||
self.state = state_dict
|
||||
|
||||
self.supports_fp16 = False
|
||||
self.supports_bf16 = True
|
||||
self.min_size_restriction = 512
|
||||
|
||||
self.input_is_latent = input_is_latent
|
||||
self.different_w = different_w
|
||||
self.num_style_feat = num_style_feat
|
||||
|
||||
unet_narrow = narrow * 0.5  # by default, use half of the input channels
|
||||
channels = {
|
||||
"4": int(512 * unet_narrow),
|
||||
"8": int(512 * unet_narrow),
|
||||
"16": int(512 * unet_narrow),
|
||||
"32": int(512 * unet_narrow),
|
||||
"64": int(256 * channel_multiplier * unet_narrow),
|
||||
"128": int(128 * channel_multiplier * unet_narrow),
|
||||
"256": int(64 * channel_multiplier * unet_narrow),
|
||||
"512": int(32 * channel_multiplier * unet_narrow),
|
||||
"1024": int(16 * channel_multiplier * unet_narrow),
|
||||
}
|
||||
|
||||
self.log_size = int(math.log(out_size, 2))
|
||||
first_out_size = 2 ** (int(math.log(out_size, 2)))
|
||||
|
||||
self.conv_body_first = nn.Conv2d(3, channels[f"{first_out_size}"], 1)
|
||||
|
||||
# downsample
|
||||
in_channels = channels[f"{first_out_size}"]
|
||||
self.conv_body_down = nn.ModuleList()
|
||||
for i in range(self.log_size, 2, -1):
|
||||
out_channels = channels[f"{2**(i - 1)}"]
|
||||
self.conv_body_down.append(ResBlock(in_channels, out_channels, mode="down"))
|
||||
in_channels = out_channels
|
||||
|
||||
self.final_conv = nn.Conv2d(in_channels, channels["4"], 3, 1, 1)
|
||||
|
||||
# upsample
|
||||
in_channels = channels["4"]
|
||||
self.conv_body_up = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
self.conv_body_up.append(ResBlock(in_channels, out_channels, mode="up"))
|
||||
in_channels = out_channels
|
||||
|
||||
# to RGB
|
||||
self.toRGB = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
self.toRGB.append(nn.Conv2d(channels[f"{2**i}"], 3, 1))
|
||||
|
||||
if different_w:
|
||||
linear_out_channel = (int(math.log(out_size, 2)) * 2 - 2) * num_style_feat
|
||||
else:
|
||||
linear_out_channel = num_style_feat
|
||||
|
||||
self.final_linear = nn.Linear(channels["4"] * 4 * 4, linear_out_channel)
|
||||
|
||||
# the decoder: stylegan2 generator with SFT modulations
|
||||
self.stylegan_decoder = StyleGAN2GeneratorCSFT(
|
||||
out_size=out_size,
|
||||
num_style_feat=num_style_feat,
|
||||
num_mlp=num_mlp,
|
||||
channel_multiplier=channel_multiplier,
|
||||
narrow=narrow,
|
||||
sft_half=sft_half,
|
||||
)
|
||||
|
||||
# load pre-trained stylegan2 model if necessary
|
||||
if decoder_load_path:
|
||||
self.stylegan_decoder.load_state_dict(
|
||||
torch.load(
|
||||
decoder_load_path, map_location=lambda storage, loc: storage
|
||||
)["params_ema"]
|
||||
)
|
||||
# fix decoder without updating params
|
||||
if fix_decoder:
|
||||
for _, param in self.stylegan_decoder.named_parameters():
|
||||
param.requires_grad = False
|
||||
|
||||
# for SFT modulations (scale and shift)
|
||||
self.condition_scale = nn.ModuleList()
|
||||
self.condition_shift = nn.ModuleList()
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
if sft_half:
|
||||
sft_out_channels = out_channels
|
||||
else:
|
||||
sft_out_channels = out_channels * 2
|
||||
self.condition_scale.append(
|
||||
nn.Sequential(
|
||||
nn.Conv2d(out_channels, out_channels, 3, 1, 1),
|
||||
nn.LeakyReLU(0.2, True),
|
||||
nn.Conv2d(out_channels, sft_out_channels, 3, 1, 1),
|
||||
)
|
||||
)
|
||||
self.condition_shift.append(
|
||||
nn.Sequential(
|
||||
nn.Conv2d(out_channels, out_channels, 3, 1, 1),
|
||||
nn.LeakyReLU(0.2, True),
|
||||
nn.Conv2d(out_channels, sft_out_channels, 3, 1, 1),
|
||||
)
|
||||
)
|
||||
self.load_state_dict(state_dict)
|
||||
|
||||
def forward(
|
||||
self, x, return_latents=False, return_rgb=True, randomize_noise=True, **kwargs
|
||||
):
|
||||
"""Forward function for GFPGANv1Clean.
|
||||
Args:
|
||||
x (Tensor): Input images.
|
||||
return_latents (bool): Whether to return style latents. Default: False.
|
||||
return_rgb (bool): Whether to return intermediate rgb images. Default: True.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.
|
||||
"""
|
||||
conditions = []
|
||||
unet_skips = []
|
||||
out_rgbs = []
|
||||
|
||||
# encoder
|
||||
feat = F.leaky_relu_(self.conv_body_first(x), negative_slope=0.2)
|
||||
for i in range(self.log_size - 2):
|
||||
feat = self.conv_body_down[i](feat)
|
||||
unet_skips.insert(0, feat)
|
||||
feat = F.leaky_relu_(self.final_conv(feat), negative_slope=0.2)
|
||||
|
||||
# style code
|
||||
style_code = self.final_linear(feat.view(feat.size(0), -1))
|
||||
if self.different_w:
|
||||
style_code = style_code.view(style_code.size(0), -1, self.num_style_feat)
|
||||
|
||||
# decode
|
||||
for i in range(self.log_size - 2):
|
||||
# add unet skip
|
||||
feat = feat + unet_skips[i]
|
||||
# ResUpLayer
|
||||
feat = self.conv_body_up[i](feat)
|
||||
# generate scale and shift for SFT layers
|
||||
scale = self.condition_scale[i](feat)
|
||||
conditions.append(scale.clone())
|
||||
shift = self.condition_shift[i](feat)
|
||||
conditions.append(shift.clone())
|
||||
# generate rgb images
|
||||
if return_rgb:
|
||||
out_rgbs.append(self.toRGB[i](feat))
|
||||
|
||||
# decoder
|
||||
image, _ = self.stylegan_decoder(
|
||||
[style_code],
|
||||
conditions,
|
||||
return_latents=return_latents,
|
||||
input_is_latent=self.input_is_latent,
|
||||
randomize_noise=randomize_noise,
|
||||
)
|
||||
|
||||
return image, out_rgbs
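
# Condition layout: the `conditions` list built above alternates scale and
# shift tensors, one pair per upsampling level
# ([scale_3, shift_3, scale_4, shift_4, ...] for levels 3..log_size), which is
# exactly the ordering StyleGAN2GeneratorCSFT.forward indexes with
# conditions[i - 1] and conditions[i].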
|
@@ -0,0 +1,776 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
"""Modified from https://github.com/wzhouxiff/RestoreFormer
|
||||
"""
|
||||
import numpy as np
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
|
||||
|
||||
class VectorQuantizer(nn.Module):
|
||||
"""
|
||||
see https://github.com/MishaLaskin/vqvae/blob/d761a999e2267766400dc646d82d3ac3657771d4/models/quantizer.py
|
||||
____________________________________________
|
||||
Discretization bottleneck part of the VQ-VAE.
|
||||
Inputs:
|
||||
- n_e : number of embeddings
|
||||
- e_dim : dimension of embedding
|
||||
- beta : commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2
|
||||
_____________________________________________
|
||||
"""
|
||||
|
||||
def __init__(self, n_e, e_dim, beta):
|
||||
super(VectorQuantizer, self).__init__()
|
||||
self.n_e = n_e
|
||||
self.e_dim = e_dim
|
||||
self.beta = beta
|
||||
|
||||
self.embedding = nn.Embedding(self.n_e, self.e_dim)
|
||||
self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e)
|
||||
|
||||
def forward(self, z):
|
||||
"""
|
||||
Inputs the output of the encoder network z and maps it to a discrete
|
||||
one-hot vector that is the index of the closest embedding vector e_j
|
||||
z (continuous) -> z_q (discrete)
|
||||
z.shape = (batch, channel, height, width)
|
||||
quantization pipeline:
|
||||
1. get encoder input (B,C,H,W)
|
||||
2. flatten input to (B*H*W,C)
|
||||
"""
|
||||
# reshape z -> (batch, height, width, channel) and flatten
|
||||
z = z.permute(0, 2, 3, 1).contiguous()
|
||||
z_flattened = z.view(-1, self.e_dim)
|
||||
# distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z
|
||||
|
||||
d = (
|
||||
torch.sum(z_flattened**2, dim=1, keepdim=True)
|
||||
+ torch.sum(self.embedding.weight**2, dim=1)
|
||||
- 2 * torch.matmul(z_flattened, self.embedding.weight.t())
|
||||
)
|
||||
|
||||
# could possibly replace this here
|
||||
# #\start...
|
||||
# find closest encodings
|
||||
|
||||
min_value, min_encoding_indices = torch.min(d, dim=1)
|
||||
|
||||
min_encoding_indices = min_encoding_indices.unsqueeze(1)
|
||||
|
||||
min_encodings = torch.zeros(min_encoding_indices.shape[0], self.n_e).to(z)
|
||||
min_encodings.scatter_(1, min_encoding_indices, 1)
|
||||
|
||||
# dtype min encodings: torch.float32
|
||||
# min_encodings shape: torch.Size([2048, 512])
|
||||
# min_encoding_indices.shape: torch.Size([2048, 1])
|
||||
|
||||
# get quantized latent vectors
|
||||
z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape)
|
||||
# .........\end
|
||||
|
||||
# with:
|
||||
# .........\start
|
||||
# min_encoding_indices = torch.argmin(d, dim=1)
|
||||
# z_q = self.embedding(min_encoding_indices)
|
||||
# ......\end......... (TODO)
|
||||
|
||||
# compute loss for embedding
|
||||
loss = torch.mean((z_q.detach() - z) ** 2) + self.beta * torch.mean(
|
||||
(z_q - z.detach()) ** 2
|
||||
)
|
||||
|
||||
# preserve gradients
|
||||
z_q = z + (z_q - z).detach()
|
||||
|
||||
# perplexity
|
||||
|
||||
e_mean = torch.mean(min_encodings, dim=0)
|
||||
perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10)))
|
||||
|
||||
# reshape back to match original input shape
|
||||
z_q = z_q.permute(0, 3, 1, 2).contiguous()
|
||||
|
||||
return z_q, loss, (perplexity, min_encodings, min_encoding_indices, d)
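
# A minimal usage sketch (shapes assumed, values illustrative): the channel
# dimension of z must equal e_dim.
#
#     vq = VectorQuantizer(n_e=1024, e_dim=256, beta=0.25)
#     z = torch.randn(2, 256, 16, 16)              # encoder output (B, C, H, W)
#     z_q, loss, (perplexity, _, indices, _) = vq(z)
#
# z_q has the same shape as z; gradients reach z through the straight-through
# estimator used above (z + (z_q - z).detach()).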
|
||||
|
||||
def get_codebook_entry(self, indices, shape):
|
||||
# shape specifying (batch, height, width, channel)
|
||||
# TODO: check for more easy handling with nn.Embedding
|
||||
min_encodings = torch.zeros(indices.shape[0], self.n_e).to(indices)
|
||||
min_encodings.scatter_(1, indices[:, None], 1)
|
||||
|
||||
# get quantized latent vectors
|
||||
z_q = torch.matmul(min_encodings.float(), self.embedding.weight)
|
||||
|
||||
if shape is not None:
|
||||
z_q = z_q.view(shape)
|
||||
|
||||
# reshape back to match original input shape
|
||||
z_q = z_q.permute(0, 3, 1, 2).contiguous()
|
||||
|
||||
return z_q
|
||||
|
||||
|
||||
# pytorch_diffusion + derived encoder decoder
|
||||
def nonlinearity(x):
|
||||
# swish
|
||||
return x * torch.sigmoid(x)
|
||||
|
||||
|
||||
def Normalize(in_channels):
|
||||
return torch.nn.GroupNorm(
|
||||
num_groups=32, num_channels=in_channels, eps=1e-6, affine=True
|
||||
)
|
||||
|
||||
|
||||
class Upsample(nn.Module):
|
||||
def __init__(self, in_channels, with_conv):
|
||||
super().__init__()
|
||||
self.with_conv = with_conv
|
||||
if self.with_conv:
|
||||
self.conv = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest")
|
||||
if self.with_conv:
|
||||
x = self.conv(x)
|
||||
return x
|
||||
|
||||
|
||||
class Downsample(nn.Module):
|
||||
def __init__(self, in_channels, with_conv):
|
||||
super().__init__()
|
||||
self.with_conv = with_conv
|
||||
if self.with_conv:
|
||||
# no asymmetric padding in torch conv, must do it ourselves
|
||||
self.conv = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=3, stride=2, padding=0
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
if self.with_conv:
|
||||
pad = (0, 1, 0, 1)
|
||||
x = torch.nn.functional.pad(x, pad, mode="constant", value=0)
|
||||
x = self.conv(x)
|
||||
else:
|
||||
x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2)
|
||||
return x
|
||||
|
||||
|
||||
class ResnetBlock(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
in_channels,
|
||||
out_channels=None,
|
||||
conv_shortcut=False,
|
||||
dropout,
|
||||
temb_channels=512
|
||||
):
|
||||
super().__init__()
|
||||
self.in_channels = in_channels
|
||||
out_channels = in_channels if out_channels is None else out_channels
|
||||
self.out_channels = out_channels
|
||||
self.use_conv_shortcut = conv_shortcut
|
||||
|
||||
self.norm1 = Normalize(in_channels)
|
||||
self.conv1 = torch.nn.Conv2d(
|
||||
in_channels, out_channels, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
if temb_channels > 0:
|
||||
self.temb_proj = torch.nn.Linear(temb_channels, out_channels)
|
||||
self.norm2 = Normalize(out_channels)
|
||||
self.dropout = torch.nn.Dropout(dropout)
|
||||
self.conv2 = torch.nn.Conv2d(
|
||||
out_channels, out_channels, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
if self.in_channels != self.out_channels:
|
||||
if self.use_conv_shortcut:
|
||||
self.conv_shortcut = torch.nn.Conv2d(
|
||||
in_channels, out_channels, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
else:
|
||||
self.nin_shortcut = torch.nn.Conv2d(
|
||||
in_channels, out_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
|
||||
def forward(self, x, temb):
|
||||
h = x
|
||||
h = self.norm1(h)
|
||||
h = nonlinearity(h)
|
||||
h = self.conv1(h)
|
||||
|
||||
if temb is not None:
|
||||
h = h + self.temb_proj(nonlinearity(temb))[:, :, None, None]
|
||||
|
||||
h = self.norm2(h)
|
||||
h = nonlinearity(h)
|
||||
h = self.dropout(h)
|
||||
h = self.conv2(h)
|
||||
|
||||
if self.in_channels != self.out_channels:
|
||||
if self.use_conv_shortcut:
|
||||
x = self.conv_shortcut(x)
|
||||
else:
|
||||
x = self.nin_shortcut(x)
|
||||
|
||||
return x + h
|
||||
|
||||
|
||||
class MultiHeadAttnBlock(nn.Module):
|
||||
def __init__(self, in_channels, head_size=1):
|
||||
super().__init__()
|
||||
self.in_channels = in_channels
|
||||
self.head_size = head_size
|
||||
self.att_size = in_channels // head_size
|
||||
assert (
|
||||
in_channels % head_size == 0
|
||||
), "The size of head should be divided by the number of channels."
|
||||
|
||||
self.norm1 = Normalize(in_channels)
|
||||
self.norm2 = Normalize(in_channels)
|
||||
|
||||
self.q = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
self.k = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
self.v = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
self.proj_out = torch.nn.Conv2d(
|
||||
in_channels, in_channels, kernel_size=1, stride=1, padding=0
|
||||
)
|
||||
self.num = 0
|
||||
|
||||
def forward(self, x, y=None):
|
||||
h_ = x
|
||||
h_ = self.norm1(h_)
|
||||
if y is None:
|
||||
y = h_
|
||||
else:
|
||||
y = self.norm2(y)
|
||||
|
||||
q = self.q(y)
|
||||
k = self.k(h_)
|
||||
v = self.v(h_)
|
||||
|
||||
# compute attention
|
||||
b, c, h, w = q.shape
|
||||
q = q.reshape(b, self.head_size, self.att_size, h * w)
|
||||
q = q.permute(0, 3, 1, 2) # b, hw, head, att
|
||||
|
||||
k = k.reshape(b, self.head_size, self.att_size, h * w)
|
||||
k = k.permute(0, 3, 1, 2)
|
||||
|
||||
v = v.reshape(b, self.head_size, self.att_size, h * w)
|
||||
v = v.permute(0, 3, 1, 2)
|
||||
|
||||
q = q.transpose(1, 2)
|
||||
v = v.transpose(1, 2)
|
||||
k = k.transpose(1, 2).transpose(2, 3)
|
||||
|
||||
scale = int(self.att_size) ** (-0.5)
|
||||
q.mul_(scale)
|
||||
w_ = torch.matmul(q, k)
|
||||
w_ = F.softmax(w_, dim=3)
|
||||
|
||||
w_ = w_.matmul(v)
|
||||
|
||||
w_ = w_.transpose(1, 2).contiguous() # [b, h*w, head, att]
|
||||
w_ = w_.view(b, h, w, -1)
|
||||
w_ = w_.permute(0, 3, 1, 2)
|
||||
|
||||
w_ = self.proj_out(w_)
|
||||
|
||||
return x + w_
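
# Shape walk-through for the attention above: q/k/v start as (b, c, h, w), are
# reshaped to (b, head_size, att_size, h*w) and permuted so the matmul runs
# over the h*w spatial tokens per head; scaling by att_size**-0.5 is the usual
# scaled dot-product normalization. When y is given, queries come from y and
# keys/values from x (cross-attention); otherwise the block is self-attention
# on x.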
|
||||
|
||||
|
||||
class MultiHeadEncoder(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
ch,
|
||||
out_ch,
|
||||
ch_mult=(1, 2, 4, 8),
|
||||
num_res_blocks=2,
|
||||
attn_resolutions=(16,),
|
||||
dropout=0.0,
|
||||
resamp_with_conv=True,
|
||||
in_channels=3,
|
||||
resolution=512,
|
||||
z_channels=256,
|
||||
double_z=True,
|
||||
enable_mid=True,
|
||||
head_size=1,
|
||||
**ignore_kwargs
|
||||
):
|
||||
super().__init__()
|
||||
self.ch = ch
|
||||
self.temb_ch = 0
|
||||
self.num_resolutions = len(ch_mult)
|
||||
self.num_res_blocks = num_res_blocks
|
||||
self.resolution = resolution
|
||||
self.in_channels = in_channels
|
||||
self.enable_mid = enable_mid
|
||||
|
||||
# downsampling
|
||||
self.conv_in = torch.nn.Conv2d(
|
||||
in_channels, self.ch, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
|
||||
curr_res = resolution
|
||||
in_ch_mult = (1,) + tuple(ch_mult)
|
||||
self.down = nn.ModuleList()
|
||||
for i_level in range(self.num_resolutions):
|
||||
block = nn.ModuleList()
|
||||
attn = nn.ModuleList()
|
||||
block_in = ch * in_ch_mult[i_level]
|
||||
block_out = ch * ch_mult[i_level]
|
||||
for i_block in range(self.num_res_blocks):
|
||||
block.append(
|
||||
ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_out,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
)
|
||||
block_in = block_out
|
||||
if curr_res in attn_resolutions:
|
||||
attn.append(MultiHeadAttnBlock(block_in, head_size))
|
||||
down = nn.Module()
|
||||
down.block = block
|
||||
down.attn = attn
|
||||
if i_level != self.num_resolutions - 1:
|
||||
down.downsample = Downsample(block_in, resamp_with_conv)
|
||||
curr_res = curr_res // 2
|
||||
self.down.append(down)
|
||||
|
||||
# middle
|
||||
if self.enable_mid:
|
||||
self.mid = nn.Module()
|
||||
self.mid.block_1 = ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_in,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
self.mid.attn_1 = MultiHeadAttnBlock(block_in, head_size)
|
||||
self.mid.block_2 = ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_in,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
|
||||
# end
|
||||
self.norm_out = Normalize(block_in)
|
||||
self.conv_out = torch.nn.Conv2d(
|
||||
block_in,
|
||||
2 * z_channels if double_z else z_channels,
|
||||
kernel_size=3,
|
||||
stride=1,
|
||||
padding=1,
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
hs = {}
|
||||
# timestep embedding
|
||||
temb = None
|
||||
|
||||
# downsampling
|
||||
h = self.conv_in(x)
|
||||
hs["in"] = h
|
||||
for i_level in range(self.num_resolutions):
|
||||
for i_block in range(self.num_res_blocks):
|
||||
h = self.down[i_level].block[i_block](h, temb)
|
||||
if len(self.down[i_level].attn) > 0:
|
||||
h = self.down[i_level].attn[i_block](h)
|
||||
|
||||
if i_level != self.num_resolutions - 1:
|
||||
# hs.append(h)
|
||||
hs["block_" + str(i_level)] = h
|
||||
h = self.down[i_level].downsample(h)
|
||||
|
||||
# middle
|
||||
# h = hs[-1]
|
||||
if self.enable_mid:
|
||||
h = self.mid.block_1(h, temb)
|
||||
hs["block_" + str(i_level) + "_atten"] = h
|
||||
h = self.mid.attn_1(h)
|
||||
h = self.mid.block_2(h, temb)
|
||||
hs["mid_atten"] = h
|
||||
|
||||
# end
|
||||
h = self.norm_out(h)
|
||||
h = nonlinearity(h)
|
||||
h = self.conv_out(h)
|
||||
# hs.append(h)
|
||||
hs["out"] = h
|
||||
|
||||
return hs
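
# The returned dict is keyed by stage: hs["in"] is the stem output,
# hs["block_<i>"] the feature before each downsample, hs["block_<i>_atten"]
# (recorded for the deepest level, just before the middle attention) and
# hs["mid_atten"] the features around the middle blocks, and hs["out"] the
# final projection. MultiHeadDecoderTransformer.forward consumes the "*_atten"
# entries as the second input to MultiHeadAttnBlock for cross-attention.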
|
||||
|
||||
|
||||
class MultiHeadDecoder(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
ch,
|
||||
out_ch,
|
||||
ch_mult=(1, 2, 4, 8),
|
||||
num_res_blocks=2,
|
||||
attn_resolutions=(16,),
|
||||
dropout=0.0,
|
||||
resamp_with_conv=True,
|
||||
in_channels=3,
|
||||
resolution=512,
|
||||
z_channels=256,
|
||||
give_pre_end=False,
|
||||
enable_mid=True,
|
||||
head_size=1,
|
||||
**ignorekwargs
|
||||
):
|
||||
super().__init__()
|
||||
self.ch = ch
|
||||
self.temb_ch = 0
|
||||
self.num_resolutions = len(ch_mult)
|
||||
self.num_res_blocks = num_res_blocks
|
||||
self.resolution = resolution
|
||||
self.in_channels = in_channels
|
||||
self.give_pre_end = give_pre_end
|
||||
self.enable_mid = enable_mid
|
||||
|
||||
# compute in_ch_mult, block_in and curr_res at lowest res
|
||||
block_in = ch * ch_mult[self.num_resolutions - 1]
|
||||
curr_res = resolution // 2 ** (self.num_resolutions - 1)
|
||||
self.z_shape = (1, z_channels, curr_res, curr_res)
|
||||
print(
|
||||
"Working with z of shape {} = {} dimensions.".format(
|
||||
self.z_shape, np.prod(self.z_shape)
|
||||
)
|
||||
)
|
||||
|
||||
# z to block_in
|
||||
self.conv_in = torch.nn.Conv2d(
|
||||
z_channels, block_in, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
|
||||
# middle
|
||||
if self.enable_mid:
|
||||
self.mid = nn.Module()
|
||||
self.mid.block_1 = ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_in,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
self.mid.attn_1 = MultiHeadAttnBlock(block_in, head_size)
|
||||
self.mid.block_2 = ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_in,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
|
||||
# upsampling
|
||||
self.up = nn.ModuleList()
|
||||
for i_level in reversed(range(self.num_resolutions)):
|
||||
block = nn.ModuleList()
|
||||
attn = nn.ModuleList()
|
||||
block_out = ch * ch_mult[i_level]
|
||||
for i_block in range(self.num_res_blocks + 1):
|
||||
block.append(
|
||||
ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_out,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
)
|
||||
block_in = block_out
|
||||
if curr_res in attn_resolutions:
|
||||
attn.append(MultiHeadAttnBlock(block_in, head_size))
|
||||
up = nn.Module()
|
||||
up.block = block
|
||||
up.attn = attn
|
||||
if i_level != 0:
|
||||
up.upsample = Upsample(block_in, resamp_with_conv)
|
||||
curr_res = curr_res * 2
|
||||
self.up.insert(0, up) # prepend to get consistent order
|
||||
|
||||
# end
|
||||
self.norm_out = Normalize(block_in)
|
||||
self.conv_out = torch.nn.Conv2d(
|
||||
block_in, out_ch, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
|
||||
def forward(self, z):
|
||||
# assert z.shape[1:] == self.z_shape[1:]
|
||||
self.last_z_shape = z.shape
|
||||
|
||||
# timestep embedding
|
||||
temb = None
|
||||
|
||||
# z to block_in
|
||||
h = self.conv_in(z)
|
||||
|
||||
# middle
|
||||
if self.enable_mid:
|
||||
h = self.mid.block_1(h, temb)
|
||||
h = self.mid.attn_1(h)
|
||||
h = self.mid.block_2(h, temb)
|
||||
|
||||
# upsampling
|
||||
for i_level in reversed(range(self.num_resolutions)):
|
||||
for i_block in range(self.num_res_blocks + 1):
|
||||
h = self.up[i_level].block[i_block](h, temb)
|
||||
if len(self.up[i_level].attn) > 0:
|
||||
h = self.up[i_level].attn[i_block](h)
|
||||
if i_level != 0:
|
||||
h = self.up[i_level].upsample(h)
|
||||
|
||||
# end
|
||||
if self.give_pre_end:
|
||||
return h
|
||||
|
||||
h = self.norm_out(h)
|
||||
h = nonlinearity(h)
|
||||
h = self.conv_out(h)
|
||||
return h
|
||||
|
||||
|
||||
class MultiHeadDecoderTransformer(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
ch,
|
||||
out_ch,
|
||||
ch_mult=(1, 2, 4, 8),
|
||||
num_res_blocks=2,
|
||||
attn_resolutions=(16,),
|
||||
dropout=0.0,
|
||||
resamp_with_conv=True,
|
||||
in_channels=3,
|
||||
resolution=512,
|
||||
z_channels=256,
|
||||
give_pre_end=False,
|
||||
enable_mid=True,
|
||||
head_size=1,
|
||||
**ignorekwargs
|
||||
):
|
||||
super().__init__()
|
||||
self.ch = ch
|
||||
self.temb_ch = 0
|
||||
self.num_resolutions = len(ch_mult)
|
||||
self.num_res_blocks = num_res_blocks
|
||||
self.resolution = resolution
|
||||
self.in_channels = in_channels
|
||||
self.give_pre_end = give_pre_end
|
||||
self.enable_mid = enable_mid
|
||||
|
||||
# compute in_ch_mult, block_in and curr_res at lowest res
|
||||
block_in = ch * ch_mult[self.num_resolutions - 1]
|
||||
curr_res = resolution // 2 ** (self.num_resolutions - 1)
|
||||
self.z_shape = (1, z_channels, curr_res, curr_res)
|
||||
print(
|
||||
"Working with z of shape {} = {} dimensions.".format(
|
||||
self.z_shape, np.prod(self.z_shape)
|
||||
)
|
||||
)
|
||||
|
||||
# z to block_in
|
||||
self.conv_in = torch.nn.Conv2d(
|
||||
z_channels, block_in, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
|
||||
# middle
|
||||
if self.enable_mid:
|
||||
self.mid = nn.Module()
|
||||
self.mid.block_1 = ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_in,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
self.mid.attn_1 = MultiHeadAttnBlock(block_in, head_size)
|
||||
self.mid.block_2 = ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_in,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
|
||||
# upsampling
|
||||
self.up = nn.ModuleList()
|
||||
for i_level in reversed(range(self.num_resolutions)):
|
||||
block = nn.ModuleList()
|
||||
attn = nn.ModuleList()
|
||||
block_out = ch * ch_mult[i_level]
|
||||
for i_block in range(self.num_res_blocks + 1):
|
||||
block.append(
|
||||
ResnetBlock(
|
||||
in_channels=block_in,
|
||||
out_channels=block_out,
|
||||
temb_channels=self.temb_ch,
|
||||
dropout=dropout,
|
||||
)
|
||||
)
|
||||
block_in = block_out
|
||||
if curr_res in attn_resolutions:
|
||||
attn.append(MultiHeadAttnBlock(block_in, head_size))
|
||||
up = nn.Module()
|
||||
up.block = block
|
||||
up.attn = attn
|
||||
if i_level != 0:
|
||||
up.upsample = Upsample(block_in, resamp_with_conv)
|
||||
curr_res = curr_res * 2
|
||||
self.up.insert(0, up) # prepend to get consistent order
|
||||
|
||||
# end
|
||||
self.norm_out = Normalize(block_in)
|
||||
self.conv_out = torch.nn.Conv2d(
|
||||
block_in, out_ch, kernel_size=3, stride=1, padding=1
|
||||
)
|
||||
|
||||
def forward(self, z, hs):
|
||||
# assert z.shape[1:] == self.z_shape[1:]
|
||||
# self.last_z_shape = z.shape
|
||||
|
||||
# timestep embedding
|
||||
temb = None
|
||||
|
||||
# z to block_in
|
||||
h = self.conv_in(z)
|
||||
|
||||
# middle
|
||||
if self.enable_mid:
|
||||
h = self.mid.block_1(h, temb)
|
||||
h = self.mid.attn_1(h, hs["mid_atten"])
|
||||
h = self.mid.block_2(h, temb)
|
||||
|
||||
# upsampling
|
||||
for i_level in reversed(range(self.num_resolutions)):
|
||||
for i_block in range(self.num_res_blocks + 1):
|
||||
h = self.up[i_level].block[i_block](h, temb)
|
||||
if len(self.up[i_level].attn) > 0:
|
||||
h = self.up[i_level].attn[i_block](
|
||||
h, hs["block_" + str(i_level) + "_atten"]
|
||||
)
|
||||
# hfeature = h.clone()
|
||||
if i_level != 0:
|
||||
h = self.up[i_level].upsample(h)
|
||||
|
||||
# end
|
||||
if self.give_pre_end:
|
||||
return h
|
||||
|
||||
h = self.norm_out(h)
|
||||
h = nonlinearity(h)
|
||||
h = self.conv_out(h)
|
||||
return h
|
||||
|
||||
|
||||
class RestoreFormer(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
state_dict,
|
||||
):
|
||||
super(RestoreFormer, self).__init__()
|
||||
|
||||
n_embed = 1024
|
||||
embed_dim = 256
|
||||
ch = 64
|
||||
out_ch = 3
|
||||
ch_mult = (1, 2, 2, 4, 4, 8)
|
||||
num_res_blocks = 2
|
||||
attn_resolutions = (16,)
|
||||
dropout = 0.0
|
||||
in_channels = 3
|
||||
resolution = 512
|
||||
z_channels = 256
|
||||
double_z = False
|
||||
enable_mid = True
|
||||
fix_decoder = False
|
||||
fix_codebook = True
|
||||
fix_encoder = False
|
||||
head_size = 8
|
||||
|
||||
self.model_arch = "RestoreFormer"
|
||||
self.sub_type = "Face SR"
|
||||
self.scale = 8
|
||||
self.in_nc = 3
|
||||
self.out_nc = out_ch
|
||||
self.state = state_dict
|
||||
|
||||
self.supports_fp16 = False
|
||||
self.supports_bf16 = True
|
||||
self.min_size_restriction = 16
|
||||
|
||||
self.encoder = MultiHeadEncoder(
|
||||
ch=ch,
|
||||
out_ch=out_ch,
|
||||
ch_mult=ch_mult,
|
||||
num_res_blocks=num_res_blocks,
|
||||
attn_resolutions=attn_resolutions,
|
||||
dropout=dropout,
|
||||
in_channels=in_channels,
|
||||
resolution=resolution,
|
||||
z_channels=z_channels,
|
||||
double_z=double_z,
|
||||
enable_mid=enable_mid,
|
||||
head_size=head_size,
|
||||
)
|
||||
self.decoder = MultiHeadDecoderTransformer(
|
||||
ch=ch,
|
||||
out_ch=out_ch,
|
||||
ch_mult=ch_mult,
|
||||
num_res_blocks=num_res_blocks,
|
||||
attn_resolutions=attn_resolutions,
|
||||
dropout=dropout,
|
||||
in_channels=in_channels,
|
||||
resolution=resolution,
|
||||
z_channels=z_channels,
|
||||
enable_mid=enable_mid,
|
||||
head_size=head_size,
|
||||
)
|
||||
|
||||
self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25)
|
||||
|
||||
self.quant_conv = torch.nn.Conv2d(z_channels, embed_dim, 1)
|
||||
self.post_quant_conv = torch.nn.Conv2d(embed_dim, z_channels, 1)
|
||||
|
||||
if fix_decoder:
|
||||
for _, param in self.decoder.named_parameters():
|
||||
param.requires_grad = False
|
||||
for _, param in self.post_quant_conv.named_parameters():
|
||||
param.requires_grad = False
|
||||
for _, param in self.quantize.named_parameters():
|
||||
param.requires_grad = False
|
||||
elif fix_codebook:
|
||||
for _, param in self.quantize.named_parameters():
|
||||
param.requires_grad = False
|
||||
|
||||
if fix_encoder:
|
||||
for _, param in self.encoder.named_parameters():
|
||||
param.requires_grad = False
|
||||
|
||||
self.load_state_dict(state_dict)
|
||||
|
||||
def encode(self, x):
|
||||
hs = self.encoder(x)
|
||||
h = self.quant_conv(hs["out"])
|
||||
quant, emb_loss, info = self.quantize(h)
|
||||
return quant, emb_loss, info, hs
|
||||
|
||||
def decode(self, quant, hs):
|
||||
quant = self.post_quant_conv(quant)
|
||||
dec = self.decoder(quant, hs)
|
||||
|
||||
return dec
|
||||
|
||||
def forward(self, input, **kwargs):
|
||||
quant, diff, info, hs = self.encode(input)
|
||||
dec = self.decode(quant, hs)
|
||||
|
||||
return dec, None
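
# A rough end-to-end sketch under the fixed hyper-parameters above (512x512
# RGB input, values normalized as expected by the pretrained checkpoint):
#
#     model = RestoreFormer(state_dict)       # state_dict from a checkpoint
#     restored, _ = model(face_tensor)        # (B, 3, 512, 512) in, same out
#
# Internally, encode() produces multi-scale features plus a quantized latent,
# and decode() runs the decoder with cross-attention to those encoder features.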
|
865
comfy_extras/chainner_models/architecture/face/stylegan2_arch.py
Normal file
@@ -0,0 +1,865 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
import math
|
||||
import random
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
from torch.nn import functional as F
|
||||
|
||||
from .fused_act import FusedLeakyReLU, fused_leaky_relu
|
||||
from .upfirdn2d import upfirdn2d
|
||||
|
||||
|
||||
class NormStyleCode(nn.Module):
|
||||
def forward(self, x):
|
||||
"""Normalize the style codes.
|
||||
|
||||
Args:
|
||||
x (Tensor): Style codes with shape (b, c).
|
||||
|
||||
Returns:
|
||||
Tensor: Normalized tensor.
|
||||
"""
|
||||
return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8)
|
||||
|
||||
|
||||
def make_resample_kernel(k):
|
||||
"""Make resampling kernel for UpFirDn.
|
||||
|
||||
Args:
|
||||
k (list[int]): A list indicating the 1D resample kernel magnitude.
|
||||
|
||||
Returns:
|
||||
Tensor: 2D resampled kernel.
|
||||
"""
|
||||
k = torch.tensor(k, dtype=torch.float32)
|
||||
if k.ndim == 1:
|
||||
k = k[None, :] * k[:, None] # to 2D kernel, outer product
|
||||
# normalize
|
||||
k /= k.sum()
|
||||
return k
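
# Worked example: for the default k = (1, 3, 3, 1) the outer product gives
#     [[1, 3, 3, 1],
#      [3, 9, 9, 3],
#      [3, 9, 9, 3],
#      [1, 3, 3, 1]]
# whose entries sum to 64, so the normalized 4x4 kernel has values
# 1/64, 3/64 and 9/64.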
|
||||
|
||||
|
||||
class UpFirDnUpsample(nn.Module):
|
||||
"""Upsample, FIR filter, and downsample (upsampole version).
|
||||
|
||||
References:
|
||||
1. https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.upfirdn.html # noqa: E501
|
||||
2. http://www.ece.northwestern.edu/local-apps/matlabhelp/toolbox/signal/upfirdn.html # noqa: E501
|
||||
|
||||
Args:
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel
|
||||
magnitude.
|
||||
factor (int): Upsampling scale factor. Default: 2.
|
||||
"""
|
||||
|
||||
def __init__(self, resample_kernel, factor=2):
|
||||
super(UpFirDnUpsample, self).__init__()
|
||||
self.kernel = make_resample_kernel(resample_kernel) * (factor**2)
|
||||
self.factor = factor
|
||||
|
||||
pad = self.kernel.shape[0] - factor
|
||||
self.pad = ((pad + 1) // 2 + factor - 1, pad // 2)
|
||||
|
||||
def forward(self, x):
|
||||
out = upfirdn2d(x, self.kernel.type_as(x), up=self.factor, down=1, pad=self.pad)
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return f"{self.__class__.__name__}(factor={self.factor})"
|
||||
|
||||
|
||||
class UpFirDnDownsample(nn.Module):
|
||||
"""Upsample, FIR filter, and downsample (downsampole version).
|
||||
|
||||
Args:
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel
|
||||
magnitude.
|
||||
factor (int): Downsampling scale factor. Default: 2.
|
||||
"""
|
||||
|
||||
def __init__(self, resample_kernel, factor=2):
|
||||
super(UpFirDnDownsample, self).__init__()
|
||||
self.kernel = make_resample_kernel(resample_kernel)
|
||||
self.factor = factor
|
||||
|
||||
pad = self.kernel.shape[0] - factor
|
||||
self.pad = ((pad + 1) // 2, pad // 2)
|
||||
|
||||
def forward(self, x):
|
||||
out = upfirdn2d(x, self.kernel.type_as(x), up=1, down=self.factor, pad=self.pad)
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return f"{self.__class__.__name__}(factor={self.factor})"
|
||||
|
||||
|
||||
class UpFirDnSmooth(nn.Module):
|
||||
"""Upsample, FIR filter, and downsample (smooth version).
|
||||
|
||||
Args:
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel
|
||||
magnitude.
|
||||
upsample_factor (int): Upsampling scale factor. Default: 1.
|
||||
downsample_factor (int): Downsampling scale factor. Default: 1.
|
||||
kernel_size (int): Kernel size. Default: 1.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, resample_kernel, upsample_factor=1, downsample_factor=1, kernel_size=1
|
||||
):
|
||||
super(UpFirDnSmooth, self).__init__()
|
||||
self.upsample_factor = upsample_factor
|
||||
self.downsample_factor = downsample_factor
|
||||
self.kernel = make_resample_kernel(resample_kernel)
|
||||
if upsample_factor > 1:
|
||||
self.kernel = self.kernel * (upsample_factor**2)
|
||||
|
||||
if upsample_factor > 1:
|
||||
pad = (self.kernel.shape[0] - upsample_factor) - (kernel_size - 1)
|
||||
self.pad = ((pad + 1) // 2 + upsample_factor - 1, pad // 2 + 1)
|
||||
elif downsample_factor > 1:
|
||||
pad = (self.kernel.shape[0] - downsample_factor) + (kernel_size - 1)
|
||||
self.pad = ((pad + 1) // 2, pad // 2)
|
||||
else:
|
||||
raise NotImplementedError
|
||||
|
||||
def forward(self, x):
|
||||
out = upfirdn2d(x, self.kernel.type_as(x), up=1, down=1, pad=self.pad)
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
f"{self.__class__.__name__}(upsample_factor={self.upsample_factor}"
|
||||
f", downsample_factor={self.downsample_factor})"
|
||||
)
|
||||
|
||||
|
||||
class EqualLinear(nn.Module):
|
||||
"""Equalized Linear as StyleGAN2.
|
||||
|
||||
Args:
|
||||
in_channels (int): Size of each sample.
|
||||
out_channels (int): Size of each output sample.
|
||||
bias (bool): If set to ``False``, the layer will not learn an additive
|
||||
bias. Default: ``True``.
|
||||
bias_init_val (float): Bias initialized value. Default: 0.
|
||||
lr_mul (float): Learning rate multiplier. Default: 1.
|
||||
activation (None | str): The activation after ``linear`` operation.
|
||||
Supported: 'fused_lrelu', None. Default: None.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
lr_mul=1,
|
||||
activation=None,
|
||||
):
|
||||
super(EqualLinear, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = out_channels
|
||||
self.lr_mul = lr_mul
|
||||
self.activation = activation
|
||||
if self.activation not in ["fused_lrelu", None]:
|
||||
raise ValueError(
|
||||
f"Wrong activation value in EqualLinear: {activation}"
|
||||
"Supported ones are: ['fused_lrelu', None]."
|
||||
)
|
||||
self.scale = (1 / math.sqrt(in_channels)) * lr_mul
|
||||
|
||||
self.weight = nn.Parameter(torch.randn(out_channels, in_channels).div_(lr_mul))
|
||||
if bias:
|
||||
self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val))
|
||||
else:
|
||||
self.register_parameter("bias", None)
|
||||
|
||||
def forward(self, x):
|
||||
if self.bias is None:
|
||||
bias = None
|
||||
else:
|
||||
bias = self.bias * self.lr_mul
|
||||
if self.activation == "fused_lrelu":
|
||||
out = F.linear(x, self.weight * self.scale)
|
||||
out = fused_leaky_relu(out, bias)
|
||||
else:
|
||||
out = F.linear(x, self.weight * self.scale, bias=bias)
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
f"{self.__class__.__name__}(in_channels={self.in_channels}, "
|
||||
f"out_channels={self.out_channels}, bias={self.bias is not None})"
|
||||
)
|
||||
|
||||
|
||||
class ModulatedConv2d(nn.Module):
|
||||
"""Modulated Conv2d used in StyleGAN2.
|
||||
|
||||
There is no bias in ModulatedConv2d.
|
||||
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
demodulate (bool): Whether to demodulate in the conv layer.
|
||||
Default: True.
|
||||
sample_mode (str | None): Indicating 'upsample', 'downsample' or None.
|
||||
Default: None.
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel
|
||||
magnitude. Default: (1, 3, 3, 1).
|
||||
eps (float): A value added to the denominator for numerical stability.
|
||||
Default: 1e-8.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
eps=1e-8,
|
||||
):
|
||||
super(ModulatedConv2d, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = out_channels
|
||||
self.kernel_size = kernel_size
|
||||
self.demodulate = demodulate
|
||||
self.sample_mode = sample_mode
|
||||
self.eps = eps
|
||||
|
||||
if self.sample_mode == "upsample":
|
||||
self.smooth = UpFirDnSmooth(
|
||||
resample_kernel,
|
||||
upsample_factor=2,
|
||||
downsample_factor=1,
|
||||
kernel_size=kernel_size,
|
||||
)
|
||||
elif self.sample_mode == "downsample":
|
||||
self.smooth = UpFirDnSmooth(
|
||||
resample_kernel,
|
||||
upsample_factor=1,
|
||||
downsample_factor=2,
|
||||
kernel_size=kernel_size,
|
||||
)
|
||||
elif self.sample_mode is None:
|
||||
pass
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Wrong sample mode {self.sample_mode}, "
|
||||
"supported ones are ['upsample', 'downsample', None]."
|
||||
)
|
||||
|
||||
self.scale = 1 / math.sqrt(in_channels * kernel_size**2)
|
||||
# modulation inside each modulated conv
|
||||
self.modulation = EqualLinear(
|
||||
num_style_feat,
|
||||
in_channels,
|
||||
bias=True,
|
||||
bias_init_val=1,
|
||||
lr_mul=1,
|
||||
activation=None,
|
||||
)
|
||||
|
||||
self.weight = nn.Parameter(
|
||||
torch.randn(1, out_channels, in_channels, kernel_size, kernel_size)
|
||||
)
|
||||
self.padding = kernel_size // 2
|
||||
|
||||
def forward(self, x, style):
|
||||
"""Forward function.
|
||||
|
||||
Args:
|
||||
x (Tensor): Tensor with shape (b, c, h, w).
|
||||
style (Tensor): Tensor with shape (b, num_style_feat).
|
||||
|
||||
Returns:
|
||||
Tensor: Modulated tensor after convolution.
|
||||
"""
|
||||
b, c, h, w = x.shape # c = c_in
|
||||
# weight modulation
|
||||
style = self.modulation(style).view(b, 1, c, 1, 1)
|
||||
# self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1)
|
||||
weight = self.scale * self.weight * style # (b, c_out, c_in, k, k)
|
||||
|
||||
if self.demodulate:
|
||||
demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps)
|
||||
weight = weight * demod.view(b, self.out_channels, 1, 1, 1)
|
||||
|
||||
weight = weight.view(
|
||||
b * self.out_channels, c, self.kernel_size, self.kernel_size
|
||||
)
|
||||
|
||||
if self.sample_mode == "upsample":
|
||||
x = x.view(1, b * c, h, w)
|
||||
weight = weight.view(
|
||||
b, self.out_channels, c, self.kernel_size, self.kernel_size
|
||||
)
|
||||
weight = weight.transpose(1, 2).reshape(
|
||||
b * c, self.out_channels, self.kernel_size, self.kernel_size
|
||||
)
|
||||
out = F.conv_transpose2d(x, weight, padding=0, stride=2, groups=b)
|
||||
out = out.view(b, self.out_channels, *out.shape[2:4])
|
||||
out = self.smooth(out)
|
||||
elif self.sample_mode == "downsample":
|
||||
x = self.smooth(x)
|
||||
x = x.view(1, b * c, *x.shape[2:4])
|
||||
out = F.conv2d(x, weight, padding=0, stride=2, groups=b)
|
||||
out = out.view(b, self.out_channels, *out.shape[2:4])
|
||||
else:
|
||||
x = x.view(1, b * c, h, w)
|
||||
# weight: (b*c_out, c_in, k, k), groups=b
|
||||
out = F.conv2d(x, weight, padding=self.padding, groups=b)
|
||||
out = out.view(b, self.out_channels, *out.shape[2:4])
|
||||
|
||||
return out
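
# Implementation note: folding the batch into the group dimension
# (x viewed as (1, b*c, h, w), weight as (b*out_channels, c, k, k), groups=b)
# makes a single grouped convolution equivalent to running one convolution per
# sample with that sample's modulated weights.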
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
f"{self.__class__.__name__}(in_channels={self.in_channels}, "
|
||||
f"out_channels={self.out_channels}, "
|
||||
f"kernel_size={self.kernel_size}, "
|
||||
f"demodulate={self.demodulate}, sample_mode={self.sample_mode})"
|
||||
)
|
||||
|
||||
|
||||
class StyleConv(nn.Module):
|
||||
"""Style conv.
|
||||
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
demodulate (bool): Whether to demodulate in the conv layer. Default: True.
|
||||
sample_mode (str | None): Indicating 'upsample', 'downsample' or None.
|
||||
Default: None.
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel
|
||||
magnitude. Default: (1, 3, 3, 1).
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
):
|
||||
super(StyleConv, self).__init__()
|
||||
self.modulated_conv = ModulatedConv2d(
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=demodulate,
|
||||
sample_mode=sample_mode,
|
||||
resample_kernel=resample_kernel,
|
||||
)
|
||||
self.weight = nn.Parameter(torch.zeros(1)) # for noise injection
|
||||
self.activate = FusedLeakyReLU(out_channels)
|
||||
|
||||
def forward(self, x, style, noise=None):
|
||||
# modulate
|
||||
out = self.modulated_conv(x, style)
|
||||
# noise injection
|
||||
if noise is None:
|
||||
b, _, h, w = out.shape
|
||||
noise = out.new_empty(b, 1, h, w).normal_()
|
||||
out = out + self.weight * noise
|
||||
# activation (with bias)
|
||||
out = self.activate(out)
|
||||
return out
|
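# Illustrative sketch (not from the upstream sources): StyleConv wraps the modulated
# conv with per-sample noise injection and a fused bias + leaky ReLU activation.
def _style_conv_sketch():
    sc = StyleConv(8, 16, kernel_size=3, num_style_feat=512, sample_mode="upsample")
    feat = torch.randn(2, 8, 16, 16)
    style = torch.randn(2, 512)
    out = sc(feat, style)  # noise is drawn internally when not given -> (2, 16, 32, 32)
    return out.shape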
||||
|
||||
|
||||
class ToRGB(nn.Module):
|
||||
"""To RGB from features.
|
||||
|
||||
Args:
|
||||
in_channels (int): Channel number of input.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
upsample (bool): Whether to upsample. Default: True.
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel
|
||||
magnitude. Default: (1, 3, 3, 1).
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, in_channels, num_style_feat, upsample=True, resample_kernel=(1, 3, 3, 1)
|
||||
):
|
||||
super(ToRGB, self).__init__()
|
||||
if upsample:
|
||||
self.upsample = UpFirDnUpsample(resample_kernel, factor=2)
|
||||
else:
|
||||
self.upsample = None
|
||||
self.modulated_conv = ModulatedConv2d(
|
||||
in_channels,
|
||||
3,
|
||||
kernel_size=1,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=False,
|
||||
sample_mode=None,
|
||||
)
|
||||
self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
|
||||
|
||||
def forward(self, x, style, skip=None):
|
||||
"""Forward function.
|
||||
|
||||
Args:
|
||||
x (Tensor): Feature tensor with shape (b, c, h, w).
|
||||
style (Tensor): Tensor with shape (b, num_style_feat).
|
||||
skip (Tensor): Base/skip tensor. Default: None.
|
||||
|
||||
Returns:
|
||||
Tensor: RGB images.
|
||||
"""
|
||||
out = self.modulated_conv(x, style)
|
||||
out = out + self.bias
|
||||
if skip is not None:
|
||||
if self.upsample:
|
||||
skip = self.upsample(skip)
|
||||
out = out + skip
|
||||
return out
|
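# Illustrative sketch (not from the upstream sources): ToRGB converts features to an RGB
# residual and, when a lower-resolution skip image is given, upsamples and adds it.
def _to_rgb_skip_sketch():
    to_rgb = ToRGB(16, num_style_feat=512, upsample=True)
    feat = torch.randn(2, 16, 32, 32)
    style = torch.randn(2, 512)
    low_res_rgb = torch.randn(2, 3, 16, 16)      # RGB accumulated at the previous resolution
    rgb = to_rgb(feat, style, skip=low_res_rgb)  # skip is upsampled 2x and added -> (2, 3, 32, 32)
    return rgb.shape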
||||
|
||||
|
||||
class ConstantInput(nn.Module):
|
||||
"""Constant input.
|
||||
|
||||
Args:
|
||||
num_channel (int): Channel number of constant input.
|
||||
size (int): Spatial size of constant input.
|
||||
"""
|
||||
|
||||
def __init__(self, num_channel, size):
|
||||
super(ConstantInput, self).__init__()
|
||||
self.weight = nn.Parameter(torch.randn(1, num_channel, size, size))
|
||||
|
||||
def forward(self, batch):
|
||||
out = self.weight.repeat(batch, 1, 1, 1)
|
||||
return out
|
||||
|
||||
|
||||
class StyleGAN2Generator(nn.Module):
|
||||
"""StyleGAN2 Generator.
|
||||
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
channel_multiplier (int): Channel multiplier for large networks of
|
||||
StyleGAN2. Default: 2.
|
||||
resample_kernel (list[int]): A list indicating the 1D resample kernel
|
||||
magnitude. An outer product will be applied to extend the 1D resample
|
||||
kernel to a 2D resample kernel. Default: (1, 3, 3, 1).
|
||||
lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.
|
||||
narrow (float): Narrow ratio for channels. Default: 1.0.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
out_size,
|
||||
num_style_feat=512,
|
||||
num_mlp=8,
|
||||
channel_multiplier=2,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
lr_mlp=0.01,
|
||||
narrow=1,
|
||||
):
|
||||
super(StyleGAN2Generator, self).__init__()
|
||||
# Style MLP layers
|
||||
self.num_style_feat = num_style_feat
|
||||
style_mlp_layers = [NormStyleCode()]
|
||||
for i in range(num_mlp):
|
||||
style_mlp_layers.append(
|
||||
EqualLinear(
|
||||
num_style_feat,
|
||||
num_style_feat,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
lr_mul=lr_mlp,
|
||||
activation="fused_lrelu",
|
||||
)
|
||||
)
|
||||
self.style_mlp = nn.Sequential(*style_mlp_layers)
|
||||
|
||||
channels = {
|
||||
"4": int(512 * narrow),
|
||||
"8": int(512 * narrow),
|
||||
"16": int(512 * narrow),
|
||||
"32": int(512 * narrow),
|
||||
"64": int(256 * channel_multiplier * narrow),
|
||||
"128": int(128 * channel_multiplier * narrow),
|
||||
"256": int(64 * channel_multiplier * narrow),
|
||||
"512": int(32 * channel_multiplier * narrow),
|
||||
"1024": int(16 * channel_multiplier * narrow),
|
||||
}
|
||||
self.channels = channels
|
||||
|
||||
self.constant_input = ConstantInput(channels["4"], size=4)
|
||||
self.style_conv1 = StyleConv(
|
||||
channels["4"],
|
||||
channels["4"],
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
resample_kernel=resample_kernel,
|
||||
)
|
||||
self.to_rgb1 = ToRGB(
|
||||
channels["4"],
|
||||
num_style_feat,
|
||||
upsample=False,
|
||||
resample_kernel=resample_kernel,
|
||||
)
|
||||
|
||||
self.log_size = int(math.log(out_size, 2))
|
||||
self.num_layers = (self.log_size - 2) * 2 + 1
|
||||
self.num_latent = self.log_size * 2 - 2
|
||||
|
||||
self.style_convs = nn.ModuleList()
|
||||
self.to_rgbs = nn.ModuleList()
|
||||
self.noises = nn.Module()
|
||||
|
||||
in_channels = channels["4"]
|
||||
# noise
|
||||
for layer_idx in range(self.num_layers):
|
||||
resolution = 2 ** ((layer_idx + 5) // 2)
|
||||
shape = [1, 1, resolution, resolution]
|
||||
self.noises.register_buffer(f"noise{layer_idx}", torch.randn(*shape))
|
||||
# style convs and to_rgbs
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
self.style_convs.append(
|
||||
StyleConv(
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode="upsample",
|
||||
resample_kernel=resample_kernel,
|
||||
)
|
||||
)
|
||||
self.style_convs.append(
|
||||
StyleConv(
|
||||
out_channels,
|
||||
out_channels,
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
resample_kernel=resample_kernel,
|
||||
)
|
||||
)
|
||||
self.to_rgbs.append(
|
||||
ToRGB(
|
||||
out_channels,
|
||||
num_style_feat,
|
||||
upsample=True,
|
||||
resample_kernel=resample_kernel,
|
||||
)
|
||||
)
|
||||
in_channels = out_channels
|
||||
|
||||
def make_noise(self):
|
||||
"""Make noise for noise injection."""
|
||||
device = self.constant_input.weight.device
|
||||
noises = [torch.randn(1, 1, 4, 4, device=device)]
|
||||
|
||||
for i in range(3, self.log_size + 1):
|
||||
for _ in range(2):
|
||||
noises.append(torch.randn(1, 1, 2**i, 2**i, device=device))
|
||||
|
||||
return noises
|
||||
|
||||
def get_latent(self, x):
|
||||
return self.style_mlp(x)
|
||||
|
||||
def mean_latent(self, num_latent):
|
||||
latent_in = torch.randn(
|
||||
num_latent, self.num_style_feat, device=self.constant_input.weight.device
|
||||
)
|
||||
latent = self.style_mlp(latent_in).mean(0, keepdim=True)
|
||||
return latent
|
||||
|
||||
def forward(
|
||||
self,
|
||||
styles,
|
||||
input_is_latent=False,
|
||||
noise=None,
|
||||
randomize_noise=True,
|
||||
truncation=1,
|
||||
truncation_latent=None,
|
||||
inject_index=None,
|
||||
return_latents=False,
|
||||
):
|
||||
"""Forward function for StyleGAN2Generator.
|
||||
|
||||
Args:
|
||||
styles (list[Tensor]): Sample codes of styles.
|
||||
input_is_latent (bool): Whether input is latent style.
|
||||
Default: False.
|
||||
noise (Tensor | None): Input noise or None. Default: None.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is
|
||||
None. Default: True.
|
||||
truncation (float): The truncation ratio. Default: 1.
|
||||
truncation_latent (Tensor | None): The truncation latent tensor. Default: None.
|
||||
inject_index (int | None): The injection index for mixing noise.
|
||||
Default: None.
|
||||
return_latents (bool): Whether to return style latents.
|
||||
Default: False.
|
||||
"""
|
||||
# style codes -> latents with Style MLP layer
|
||||
if not input_is_latent:
|
||||
styles = [self.style_mlp(s) for s in styles]
|
||||
# noises
|
||||
if noise is None:
|
||||
if randomize_noise:
|
||||
noise = [None] * self.num_layers # for each style conv layer
|
||||
else: # use the stored noise
|
||||
noise = [
|
||||
getattr(self.noises, f"noise{i}") for i in range(self.num_layers)
|
||||
]
|
||||
# style truncation
|
||||
if truncation < 1:
|
||||
style_truncation = []
|
||||
for style in styles:
|
||||
style_truncation.append(
|
||||
truncation_latent + truncation * (style - truncation_latent)
|
||||
)
|
||||
styles = style_truncation
|
||||
# get style latent with injection
|
||||
if len(styles) == 1:
|
||||
inject_index = self.num_latent
|
||||
|
||||
if styles[0].ndim < 3:
|
||||
# repeat latent code for all the layers
|
||||
latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
else: # used for encoder with different latent code for each layer
|
||||
latent = styles[0]
|
||||
elif len(styles) == 2: # mixing noises
|
||||
if inject_index is None:
|
||||
inject_index = random.randint(1, self.num_latent - 1)
|
||||
latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
latent2 = (
|
||||
styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)
|
||||
)
|
||||
latent = torch.cat([latent1, latent2], 1)
|
||||
|
||||
# main generation
|
||||
out = self.constant_input(latent.shape[0])
|
||||
out = self.style_conv1(out, latent[:, 0], noise=noise[0])
|
||||
skip = self.to_rgb1(out, latent[:, 1])
|
||||
|
||||
i = 1
|
||||
for conv1, conv2, noise1, noise2, to_rgb in zip(
|
||||
self.style_convs[::2],
|
||||
self.style_convs[1::2],
|
||||
noise[1::2],
|
||||
noise[2::2],
|
||||
self.to_rgbs,
|
||||
):
|
||||
out = conv1(out, latent[:, i], noise=noise1)
|
||||
out = conv2(out, latent[:, i + 1], noise=noise2)
|
||||
skip = to_rgb(out, latent[:, i + 2], skip)
|
||||
i += 2
|
||||
|
||||
image = skip
|
||||
|
||||
if return_latents:
|
||||
return image, latent
|
||||
else:
|
||||
return image, None
|
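# Illustrative sketch (not from the upstream sources): sampling from the generator on CPU,
# including truncation toward the mean latent as commonly used for face-restoration priors.
def _stylegan2_generator_sketch():
    g = StyleGAN2Generator(out_size=64, num_style_feat=512, num_mlp=8)
    z = torch.randn(1, 512)
    img, _ = g([z])                                  # img: (1, 3, 64, 64)
    mean_w = g.mean_latent(num_latent=4096)          # average style latent
    img_trunc, _ = g([z], truncation=0.7, truncation_latent=mean_w)
    return img.shape, img_trunc.shape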
||||
|
||||
|
||||
class ScaledLeakyReLU(nn.Module):
|
||||
"""Scaled LeakyReLU.
|
||||
|
||||
Args:
|
||||
negative_slope (float): Negative slope. Default: 0.2.
|
||||
"""
|
||||
|
||||
def __init__(self, negative_slope=0.2):
|
||||
super(ScaledLeakyReLU, self).__init__()
|
||||
self.negative_slope = negative_slope
|
||||
|
||||
def forward(self, x):
|
||||
out = F.leaky_relu(x, negative_slope=self.negative_slope)
|
||||
return out * math.sqrt(2)
|
||||
|
||||
|
||||
class EqualConv2d(nn.Module):
|
||||
"""Equalized Linear as StyleGAN2.
|
||||
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
stride (int): Stride of the convolution. Default: 1
|
||||
padding (int): Zero-padding added to both sides of the input.
|
||||
Default: 0.
|
||||
bias (bool): If ``True``, adds a learnable bias to the output.
|
||||
Default: ``True``.
|
||||
bias_init_val (float): Bias initialized value. Default: 0.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
stride=1,
|
||||
padding=0,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
):
|
||||
super(EqualConv2d, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = out_channels
|
||||
self.kernel_size = kernel_size
|
||||
self.stride = stride
|
||||
self.padding = padding
|
||||
self.scale = 1 / math.sqrt(in_channels * kernel_size**2)
|
||||
|
||||
self.weight = nn.Parameter(
|
||||
torch.randn(out_channels, in_channels, kernel_size, kernel_size)
|
||||
)
|
||||
if bias:
|
||||
self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val))
|
||||
else:
|
||||
self.register_parameter("bias", None)
|
||||
|
||||
def forward(self, x):
|
||||
out = F.conv2d(
|
||||
x,
|
||||
self.weight * self.scale,
|
||||
bias=self.bias,
|
||||
stride=self.stride,
|
||||
padding=self.padding,
|
||||
)
|
||||
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
f"{self.__class__.__name__}(in_channels={self.in_channels}, "
|
||||
f"out_channels={self.out_channels}, "
|
||||
f"kernel_size={self.kernel_size},"
|
||||
f" stride={self.stride}, padding={self.padding}, "
|
||||
f"bias={self.bias is not None})"
|
||||
)
|
||||
|
||||
|
||||
class ConvLayer(nn.Sequential):
|
||||
"""Conv Layer used in StyleGAN2 Discriminator.
|
||||
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Kernel size.
|
||||
downsample (bool): Whether to downsample by a factor of 2.
|
||||
Default: False.
|
||||
resample_kernel (list[int]): A list indicating the 1D resample
|
||||
kernel magnitude. An outer product will be applied to
|
||||
extend the 1D resample kernel to a 2D resample kernel.
|
||||
Default: (1, 3, 3, 1).
|
||||
bias (bool): Whether to use a bias. Default: True.
|
||||
activate (bool): Whether to use activation. Default: True.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
downsample=False,
|
||||
resample_kernel=(1, 3, 3, 1),
|
||||
bias=True,
|
||||
activate=True,
|
||||
):
|
||||
layers = []
|
||||
# downsample
|
||||
if downsample:
|
||||
layers.append(
|
||||
UpFirDnSmooth(
|
||||
resample_kernel,
|
||||
upsample_factor=1,
|
||||
downsample_factor=2,
|
||||
kernel_size=kernel_size,
|
||||
)
|
||||
)
|
||||
stride = 2
|
||||
self.padding = 0
|
||||
else:
|
||||
stride = 1
|
||||
self.padding = kernel_size // 2
|
||||
# conv
|
||||
layers.append(
|
||||
EqualConv2d(
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
stride=stride,
|
||||
padding=self.padding,
|
||||
bias=bias and not activate,
|
||||
)
|
||||
)
|
||||
# activation
|
||||
if activate:
|
||||
if bias:
|
||||
layers.append(FusedLeakyReLU(out_channels))
|
||||
else:
|
||||
layers.append(ScaledLeakyReLU(0.2))
|
||||
|
||||
super(ConvLayer, self).__init__(*layers)
|
||||
|
||||
|
||||
class ResBlock(nn.Module):
|
||||
"""Residual block used in StyleGAN2 Discriminator.
|
||||
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
resample_kernel (list[int]): A list indicating the 1D resample
|
||||
kernel magnitude. An outer product will be applied to
|
||||
extend the 1D resample kernel to a 2D resample kernel.
|
||||
Default: (1, 3, 3, 1).
|
||||
"""
|
||||
|
||||
def __init__(self, in_channels, out_channels, resample_kernel=(1, 3, 3, 1)):
|
||||
super(ResBlock, self).__init__()
|
||||
|
||||
self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True)
|
||||
self.conv2 = ConvLayer(
|
||||
in_channels,
|
||||
out_channels,
|
||||
3,
|
||||
downsample=True,
|
||||
resample_kernel=resample_kernel,
|
||||
bias=True,
|
||||
activate=True,
|
||||
)
|
||||
self.skip = ConvLayer(
|
||||
in_channels,
|
||||
out_channels,
|
||||
1,
|
||||
downsample=True,
|
||||
resample_kernel=resample_kernel,
|
||||
bias=False,
|
||||
activate=False,
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
out = self.conv1(x)
|
||||
out = self.conv2(out)
|
||||
skip = self.skip(x)
|
||||
out = (out + skip) / math.sqrt(2)
|
||||
return out
|
@@ -0,0 +1,709 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
import math
|
||||
import random
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
from torch.nn import functional as F
|
||||
|
||||
from .fused_act import FusedLeakyReLU, fused_leaky_relu
|
||||
|
||||
|
||||
class NormStyleCode(nn.Module):
|
||||
def forward(self, x):
|
||||
"""Normalize the style codes.
|
||||
Args:
|
||||
x (Tensor): Style codes with shape (b, c).
|
||||
Returns:
|
||||
Tensor: Normalized tensor.
|
||||
"""
|
||||
return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8)
|
||||
|
||||
|
||||
class EqualLinear(nn.Module):
|
||||
"""Equalized Linear as StyleGAN2.
|
||||
Args:
|
||||
in_channels (int): Size of each sample.
|
||||
out_channels (int): Size of each output sample.
|
||||
bias (bool): If set to ``False``, the layer will not learn an additive
|
||||
bias. Default: ``True``.
|
||||
bias_init_val (float): Bias initialized value. Default: 0.
|
||||
lr_mul (float): Learning rate multiplier. Default: 1.
|
||||
activation (None | str): The activation after ``linear`` operation.
|
||||
Supported: 'fused_lrelu', None. Default: None.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
lr_mul=1,
|
||||
activation=None,
|
||||
):
|
||||
super(EqualLinear, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = out_channels
|
||||
self.lr_mul = lr_mul
|
||||
self.activation = activation
|
||||
if self.activation not in ["fused_lrelu", None]:
|
||||
raise ValueError(
|
||||
f"Wrong activation value in EqualLinear: {activation}"
|
||||
"Supported ones are: ['fused_lrelu', None]."
|
||||
)
|
||||
self.scale = (1 / math.sqrt(in_channels)) * lr_mul
|
||||
|
||||
self.weight = nn.Parameter(torch.randn(out_channels, in_channels).div_(lr_mul))
|
||||
if bias:
|
||||
self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val))
|
||||
else:
|
||||
self.register_parameter("bias", None)
|
||||
|
||||
def forward(self, x):
|
||||
if self.bias is None:
|
||||
bias = None
|
||||
else:
|
||||
bias = self.bias * self.lr_mul
|
||||
if self.activation == "fused_lrelu":
|
||||
out = F.linear(x, self.weight * self.scale)
|
||||
out = fused_leaky_relu(out, bias)
|
||||
else:
|
||||
out = F.linear(x, self.weight * self.scale, bias=bias)
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
f"{self.__class__.__name__}(in_channels={self.in_channels}, "
|
||||
f"out_channels={self.out_channels}, bias={self.bias is not None})"
|
||||
)
|
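# Illustrative sketch (not from the upstream sources): EqualLinear stores its weight divided
# by lr_mul and rescales at runtime, so the effective learning rate of this layer is reduced.
def _equal_linear_sketch():
    fc = EqualLinear(512, 256, bias=True, lr_mul=0.01, activation="fused_lrelu")
    w = torch.randn(4, 512)
    out = fc(w)  # -> (4, 256)
    return out.shape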
||||
|
||||
|
||||
class ModulatedConv2d(nn.Module):
|
||||
"""Modulated Conv2d used in StyleGAN2.
|
||||
There is no bias in ModulatedConv2d.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
demodulate (bool): Whether to demodulate in the conv layer.
|
||||
Default: True.
|
||||
sample_mode (str | None): Indicating 'upsample', 'downsample' or None.
|
||||
Default: None.
|
||||
eps (float): A value added to the denominator for numerical stability.
|
||||
Default: 1e-8.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
eps=1e-8,
|
||||
interpolation_mode="bilinear",
|
||||
):
|
||||
super(ModulatedConv2d, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = out_channels
|
||||
self.kernel_size = kernel_size
|
||||
self.demodulate = demodulate
|
||||
self.sample_mode = sample_mode
|
||||
self.eps = eps
|
||||
self.interpolation_mode = interpolation_mode
|
||||
if self.interpolation_mode == "nearest":
|
||||
self.align_corners = None
|
||||
else:
|
||||
self.align_corners = False
|
||||
|
||||
self.scale = 1 / math.sqrt(in_channels * kernel_size**2)
|
||||
# modulation inside each modulated conv
|
||||
self.modulation = EqualLinear(
|
||||
num_style_feat,
|
||||
in_channels,
|
||||
bias=True,
|
||||
bias_init_val=1,
|
||||
lr_mul=1,
|
||||
activation=None,
|
||||
)
|
||||
|
||||
self.weight = nn.Parameter(
|
||||
torch.randn(1, out_channels, in_channels, kernel_size, kernel_size)
|
||||
)
|
||||
self.padding = kernel_size // 2
|
||||
|
||||
def forward(self, x, style):
|
||||
"""Forward function.
|
||||
Args:
|
||||
x (Tensor): Tensor with shape (b, c, h, w).
|
||||
style (Tensor): Tensor with shape (b, num_style_feat).
|
||||
Returns:
|
||||
Tensor: Modulated tensor after convolution.
|
||||
"""
|
||||
b, c, h, w = x.shape # c = c_in
|
||||
# weight modulation
|
||||
style = self.modulation(style).view(b, 1, c, 1, 1)
|
||||
# self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1)
|
||||
weight = self.scale * self.weight * style # (b, c_out, c_in, k, k)
|
||||
|
||||
if self.demodulate:
|
||||
demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps)
|
||||
weight = weight * demod.view(b, self.out_channels, 1, 1, 1)
|
||||
|
||||
weight = weight.view(
|
||||
b * self.out_channels, c, self.kernel_size, self.kernel_size
|
||||
)
|
||||
|
||||
if self.sample_mode == "upsample":
|
||||
x = F.interpolate(
|
||||
x,
|
||||
scale_factor=2,
|
||||
mode=self.interpolation_mode,
|
||||
align_corners=self.align_corners,
|
||||
)
|
||||
elif self.sample_mode == "downsample":
|
||||
x = F.interpolate(
|
||||
x,
|
||||
scale_factor=0.5,
|
||||
mode=self.interpolation_mode,
|
||||
align_corners=self.align_corners,
|
||||
)
|
||||
|
||||
b, c, h, w = x.shape
|
||||
x = x.view(1, b * c, h, w)
|
||||
# weight: (b*c_out, c_in, k, k), groups=b
|
||||
out = F.conv2d(x, weight, padding=self.padding, groups=b)
|
||||
out = out.view(b, self.out_channels, *out.shape[2:4])
|
||||
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
f"{self.__class__.__name__}(in_channels={self.in_channels}, "
|
||||
f"out_channels={self.out_channels}, "
|
||||
f"kernel_size={self.kernel_size}, "
|
||||
f"demodulate={self.demodulate}, sample_mode={self.sample_mode})"
|
||||
)
|
||||
|
||||
|
||||
class StyleConv(nn.Module):
|
||||
"""Style conv.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
demodulate (bool): Whether to demodulate in the conv layer. Default: True.
|
||||
sample_mode (str | None): Indicating 'upsample', 'downsample' or None.
|
||||
Default: None.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
interpolation_mode="bilinear",
|
||||
):
|
||||
super(StyleConv, self).__init__()
|
||||
self.modulated_conv = ModulatedConv2d(
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=demodulate,
|
||||
sample_mode=sample_mode,
|
||||
interpolation_mode=interpolation_mode,
|
||||
)
|
||||
self.weight = nn.Parameter(torch.zeros(1)) # for noise injection
|
||||
self.activate = FusedLeakyReLU(out_channels)
|
||||
|
||||
def forward(self, x, style, noise=None):
|
||||
# modulate
|
||||
out = self.modulated_conv(x, style)
|
||||
# noise injection
|
||||
if noise is None:
|
||||
b, _, h, w = out.shape
|
||||
noise = out.new_empty(b, 1, h, w).normal_()
|
||||
out = out + self.weight * noise
|
||||
# activation (with bias)
|
||||
out = self.activate(out)
|
||||
return out
|
||||
|
||||
|
||||
class ToRGB(nn.Module):
|
||||
"""To RGB from features.
|
||||
Args:
|
||||
in_channels (int): Channel number of input.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
upsample (bool): Whether to upsample. Default: True.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, in_channels, num_style_feat, upsample=True, interpolation_mode="bilinear"
|
||||
):
|
||||
super(ToRGB, self).__init__()
|
||||
self.upsample = upsample
|
||||
self.interpolation_mode = interpolation_mode
|
||||
if self.interpolation_mode == "nearest":
|
||||
self.align_corners = None
|
||||
else:
|
||||
self.align_corners = False
|
||||
self.modulated_conv = ModulatedConv2d(
|
||||
in_channels,
|
||||
3,
|
||||
kernel_size=1,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=False,
|
||||
sample_mode=None,
|
||||
interpolation_mode=interpolation_mode,
|
||||
)
|
||||
self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
|
||||
|
||||
def forward(self, x, style, skip=None):
|
||||
"""Forward function.
|
||||
Args:
|
||||
x (Tensor): Feature tensor with shape (b, c, h, w).
|
||||
style (Tensor): Tensor with shape (b, num_style_feat).
|
||||
skip (Tensor): Base/skip tensor. Default: None.
|
||||
Returns:
|
||||
Tensor: RGB images.
|
||||
"""
|
||||
out = self.modulated_conv(x, style)
|
||||
out = out + self.bias
|
||||
if skip is not None:
|
||||
if self.upsample:
|
||||
skip = F.interpolate(
|
||||
skip,
|
||||
scale_factor=2,
|
||||
mode=self.interpolation_mode,
|
||||
align_corners=self.align_corners,
|
||||
)
|
||||
out = out + skip
|
||||
return out
|
||||
|
||||
|
||||
class ConstantInput(nn.Module):
|
||||
"""Constant input.
|
||||
Args:
|
||||
num_channel (int): Channel number of constant input.
|
||||
size (int): Spatial size of constant input.
|
||||
"""
|
||||
|
||||
def __init__(self, num_channel, size):
|
||||
super(ConstantInput, self).__init__()
|
||||
self.weight = nn.Parameter(torch.randn(1, num_channel, size, size))
|
||||
|
||||
def forward(self, batch):
|
||||
out = self.weight.repeat(batch, 1, 1, 1)
|
||||
return out
|
||||
|
||||
|
||||
class StyleGAN2GeneratorBilinear(nn.Module):
|
||||
"""StyleGAN2 Generator.
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
channel_multiplier (int): Channel multiplier for large networks of
|
||||
StyleGAN2. Default: 2.
|
||||
lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01.
|
||||
narrow (float): Narrow ratio for channels. Default: 1.0.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
out_size,
|
||||
num_style_feat=512,
|
||||
num_mlp=8,
|
||||
channel_multiplier=2,
|
||||
lr_mlp=0.01,
|
||||
narrow=1,
|
||||
interpolation_mode="bilinear",
|
||||
):
|
||||
super(StyleGAN2GeneratorBilinear, self).__init__()
|
||||
# Style MLP layers
|
||||
self.num_style_feat = num_style_feat
|
||||
style_mlp_layers = [NormStyleCode()]
|
||||
for i in range(num_mlp):
|
||||
style_mlp_layers.append(
|
||||
EqualLinear(
|
||||
num_style_feat,
|
||||
num_style_feat,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
lr_mul=lr_mlp,
|
||||
activation="fused_lrelu",
|
||||
)
|
||||
)
|
||||
self.style_mlp = nn.Sequential(*style_mlp_layers)
|
||||
|
||||
channels = {
|
||||
"4": int(512 * narrow),
|
||||
"8": int(512 * narrow),
|
||||
"16": int(512 * narrow),
|
||||
"32": int(512 * narrow),
|
||||
"64": int(256 * channel_multiplier * narrow),
|
||||
"128": int(128 * channel_multiplier * narrow),
|
||||
"256": int(64 * channel_multiplier * narrow),
|
||||
"512": int(32 * channel_multiplier * narrow),
|
||||
"1024": int(16 * channel_multiplier * narrow),
|
||||
}
|
||||
self.channels = channels
|
||||
|
||||
self.constant_input = ConstantInput(channels["4"], size=4)
|
||||
self.style_conv1 = StyleConv(
|
||||
channels["4"],
|
||||
channels["4"],
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
interpolation_mode=interpolation_mode,
|
||||
)
|
||||
self.to_rgb1 = ToRGB(
|
||||
channels["4"],
|
||||
num_style_feat,
|
||||
upsample=False,
|
||||
interpolation_mode=interpolation_mode,
|
||||
)
|
||||
|
||||
self.log_size = int(math.log(out_size, 2))
|
||||
self.num_layers = (self.log_size - 2) * 2 + 1
|
||||
self.num_latent = self.log_size * 2 - 2
|
||||
|
||||
self.style_convs = nn.ModuleList()
|
||||
self.to_rgbs = nn.ModuleList()
|
||||
self.noises = nn.Module()
|
||||
|
||||
in_channels = channels["4"]
|
||||
# noise
|
||||
for layer_idx in range(self.num_layers):
|
||||
resolution = 2 ** ((layer_idx + 5) // 2)
|
||||
shape = [1, 1, resolution, resolution]
|
||||
self.noises.register_buffer(f"noise{layer_idx}", torch.randn(*shape))
|
||||
# style convs and to_rgbs
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
self.style_convs.append(
|
||||
StyleConv(
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode="upsample",
|
||||
interpolation_mode=interpolation_mode,
|
||||
)
|
||||
)
|
||||
self.style_convs.append(
|
||||
StyleConv(
|
||||
out_channels,
|
||||
out_channels,
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
interpolation_mode=interpolation_mode,
|
||||
)
|
||||
)
|
||||
self.to_rgbs.append(
|
||||
ToRGB(
|
||||
out_channels,
|
||||
num_style_feat,
|
||||
upsample=True,
|
||||
interpolation_mode=interpolation_mode,
|
||||
)
|
||||
)
|
||||
in_channels = out_channels
|
||||
|
||||
def make_noise(self):
|
||||
"""Make noise for noise injection."""
|
||||
device = self.constant_input.weight.device
|
||||
noises = [torch.randn(1, 1, 4, 4, device=device)]
|
||||
|
||||
for i in range(3, self.log_size + 1):
|
||||
for _ in range(2):
|
||||
noises.append(torch.randn(1, 1, 2**i, 2**i, device=device))
|
||||
|
||||
return noises
|
||||
|
||||
def get_latent(self, x):
|
||||
return self.style_mlp(x)
|
||||
|
||||
def mean_latent(self, num_latent):
|
||||
latent_in = torch.randn(
|
||||
num_latent, self.num_style_feat, device=self.constant_input.weight.device
|
||||
)
|
||||
latent = self.style_mlp(latent_in).mean(0, keepdim=True)
|
||||
return latent
|
||||
|
||||
def forward(
|
||||
self,
|
||||
styles,
|
||||
input_is_latent=False,
|
||||
noise=None,
|
||||
randomize_noise=True,
|
||||
truncation=1,
|
||||
truncation_latent=None,
|
||||
inject_index=None,
|
||||
return_latents=False,
|
||||
):
|
||||
"""Forward function for StyleGAN2Generator.
|
||||
Args:
|
||||
styles (list[Tensor]): Sample codes of styles.
|
||||
input_is_latent (bool): Whether input is latent style.
|
||||
Default: False.
|
||||
noise (Tensor | None): Input noise or None. Default: None.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is
|
||||
None. Default: True.
|
||||
truncation (float): The truncation ratio. Default: 1.
|
||||
truncation_latent (Tensor | None): The truncation latent tensor. Default: None.
|
||||
inject_index (int | None): The injection index for mixing noise.
|
||||
Default: None.
|
||||
return_latents (bool): Whether to return style latents.
|
||||
Default: False.
|
||||
"""
|
||||
# style codes -> latents with Style MLP layer
|
||||
if not input_is_latent:
|
||||
styles = [self.style_mlp(s) for s in styles]
|
||||
# noises
|
||||
if noise is None:
|
||||
if randomize_noise:
|
||||
noise = [None] * self.num_layers # for each style conv layer
|
||||
else: # use the stored noise
|
||||
noise = [
|
||||
getattr(self.noises, f"noise{i}") for i in range(self.num_layers)
|
||||
]
|
||||
# style truncation
|
||||
if truncation < 1:
|
||||
style_truncation = []
|
||||
for style in styles:
|
||||
style_truncation.append(
|
||||
truncation_latent + truncation * (style - truncation_latent)
|
||||
)
|
||||
styles = style_truncation
|
||||
# get style latent with injection
|
||||
if len(styles) == 1:
|
||||
inject_index = self.num_latent
|
||||
|
||||
if styles[0].ndim < 3:
|
||||
# repeat latent code for all the layers
|
||||
latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
else: # used for encoder with different latent code for each layer
|
||||
latent = styles[0]
|
||||
elif len(styles) == 2: # mixing noises
|
||||
if inject_index is None:
|
||||
inject_index = random.randint(1, self.num_latent - 1)
|
||||
latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
latent2 = (
|
||||
styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)
|
||||
)
|
||||
latent = torch.cat([latent1, latent2], 1)
|
||||
|
||||
# main generation
|
||||
out = self.constant_input(latent.shape[0])
|
||||
out = self.style_conv1(out, latent[:, 0], noise=noise[0])
|
||||
skip = self.to_rgb1(out, latent[:, 1])
|
||||
|
||||
i = 1
|
||||
for conv1, conv2, noise1, noise2, to_rgb in zip(
|
||||
self.style_convs[::2],
|
||||
self.style_convs[1::2],
|
||||
noise[1::2],
|
||||
noise[2::2],
|
||||
self.to_rgbs,
|
||||
):
|
||||
out = conv1(out, latent[:, i], noise=noise1)
|
||||
out = conv2(out, latent[:, i + 1], noise=noise2)
|
||||
skip = to_rgb(out, latent[:, i + 2], skip)
|
||||
i += 2
|
||||
|
||||
image = skip
|
||||
|
||||
if return_latents:
|
||||
return image, latent
|
||||
else:
|
||||
return image, None
|
||||
|
||||
|
||||
class ScaledLeakyReLU(nn.Module):
|
||||
"""Scaled LeakyReLU.
|
||||
Args:
|
||||
negative_slope (float): Negative slope. Default: 0.2.
|
||||
"""
|
||||
|
||||
def __init__(self, negative_slope=0.2):
|
||||
super(ScaledLeakyReLU, self).__init__()
|
||||
self.negative_slope = negative_slope
|
||||
|
||||
def forward(self, x):
|
||||
out = F.leaky_relu(x, negative_slope=self.negative_slope)
|
||||
return out * math.sqrt(2)
|
||||
|
||||
|
||||
class EqualConv2d(nn.Module):
|
||||
"""Equalized Linear as StyleGAN2.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
stride (int): Stride of the convolution. Default: 1
|
||||
padding (int): Zero-padding added to both sides of the input.
|
||||
Default: 0.
|
||||
bias (bool): If ``True``, adds a learnable bias to the output.
|
||||
Default: ``True``.
|
||||
bias_init_val (float): Bias initialized value. Default: 0.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
stride=1,
|
||||
padding=0,
|
||||
bias=True,
|
||||
bias_init_val=0,
|
||||
):
|
||||
super(EqualConv2d, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = out_channels
|
||||
self.kernel_size = kernel_size
|
||||
self.stride = stride
|
||||
self.padding = padding
|
||||
self.scale = 1 / math.sqrt(in_channels * kernel_size**2)
|
||||
|
||||
self.weight = nn.Parameter(
|
||||
torch.randn(out_channels, in_channels, kernel_size, kernel_size)
|
||||
)
|
||||
if bias:
|
||||
self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val))
|
||||
else:
|
||||
self.register_parameter("bias", None)
|
||||
|
||||
def forward(self, x):
|
||||
out = F.conv2d(
|
||||
x,
|
||||
self.weight * self.scale,
|
||||
bias=self.bias,
|
||||
stride=self.stride,
|
||||
padding=self.padding,
|
||||
)
|
||||
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
f"{self.__class__.__name__}(in_channels={self.in_channels}, "
|
||||
f"out_channels={self.out_channels}, "
|
||||
f"kernel_size={self.kernel_size},"
|
||||
f" stride={self.stride}, padding={self.padding}, "
|
||||
f"bias={self.bias is not None})"
|
||||
)
|
||||
|
||||
|
||||
class ConvLayer(nn.Sequential):
|
||||
"""Conv Layer used in StyleGAN2 Discriminator.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Kernel size.
|
||||
downsample (bool): Whether to downsample by a factor of 2.
|
||||
Default: False.
|
||||
bias (bool): Whether to use a bias. Default: True.
|
||||
activate (bool): Whether to use activation. Default: True.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
downsample=False,
|
||||
bias=True,
|
||||
activate=True,
|
||||
interpolation_mode="bilinear",
|
||||
):
|
||||
layers = []
|
||||
self.interpolation_mode = interpolation_mode
|
||||
# downsample
|
||||
if downsample:
|
||||
if self.interpolation_mode == "nearest":
|
||||
self.align_corners = None
|
||||
else:
|
||||
self.align_corners = False
|
||||
|
||||
layers.append(
|
||||
torch.nn.Upsample(
|
||||
scale_factor=0.5,
|
||||
mode=interpolation_mode,
|
||||
align_corners=self.align_corners,
|
||||
)
|
||||
)
|
||||
stride = 1
|
||||
self.padding = kernel_size // 2
|
||||
# conv
|
||||
layers.append(
|
||||
EqualConv2d(
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
stride=stride,
|
||||
padding=self.padding,
|
||||
bias=bias and not activate,
|
||||
)
|
||||
)
|
||||
# activation
|
||||
if activate:
|
||||
if bias:
|
||||
layers.append(FusedLeakyReLU(out_channels))
|
||||
else:
|
||||
layers.append(ScaledLeakyReLU(0.2))
|
||||
|
||||
super(ConvLayer, self).__init__(*layers)
|
||||
|
||||
|
||||
class ResBlock(nn.Module):
|
||||
"""Residual block used in StyleGAN2 Discriminator.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
"""
|
||||
|
||||
def __init__(self, in_channels, out_channels, interpolation_mode="bilinear"):
|
||||
super(ResBlock, self).__init__()
|
||||
|
||||
self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True)
|
||||
self.conv2 = ConvLayer(
|
||||
in_channels,
|
||||
out_channels,
|
||||
3,
|
||||
downsample=True,
|
||||
interpolation_mode=interpolation_mode,
|
||||
bias=True,
|
||||
activate=True,
|
||||
)
|
||||
self.skip = ConvLayer(
|
||||
in_channels,
|
||||
out_channels,
|
||||
1,
|
||||
downsample=True,
|
||||
interpolation_mode=interpolation_mode,
|
||||
bias=False,
|
||||
activate=False,
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
out = self.conv1(x)
|
||||
out = self.conv2(out)
|
||||
skip = self.skip(x)
|
||||
out = (out + skip) / math.sqrt(2)
|
||||
return out
|
@@ -0,0 +1,453 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
import math
import random
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
from torch.nn import functional as F
|
||||
from torch.nn import init
|
||||
from torch.nn.modules.batchnorm import _BatchNorm
|
||||
|
||||
|
||||
@torch.no_grad()
|
||||
def default_init_weights(module_list, scale=1, bias_fill=0, **kwargs):
|
||||
"""Initialize network weights.
|
||||
Args:
|
||||
module_list (list[nn.Module] | nn.Module): Modules to be initialized.
|
||||
scale (float): Scale initialized weights, especially for residual
|
||||
blocks. Default: 1.
|
||||
bias_fill (float): The value to fill bias. Default: 0
|
||||
kwargs (dict): Other arguments for initialization function.
|
||||
"""
|
||||
if not isinstance(module_list, list):
|
||||
module_list = [module_list]
|
||||
for module in module_list:
|
||||
for m in module.modules():
|
||||
if isinstance(m, nn.Conv2d):
|
||||
init.kaiming_normal_(m.weight, **kwargs)
|
||||
m.weight.data *= scale
|
||||
if m.bias is not None:
|
||||
m.bias.data.fill_(bias_fill)
|
||||
elif isinstance(m, nn.Linear):
|
||||
init.kaiming_normal_(m.weight, **kwargs)
|
||||
m.weight.data *= scale
|
||||
if m.bias is not None:
|
||||
m.bias.data.fill_(bias_fill)
|
||||
elif isinstance(m, _BatchNorm):
|
||||
init.constant_(m.weight, 1)
|
||||
if m.bias is not None:
|
||||
m.bias.data.fill_(bias_fill)
|
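# Illustrative sketch (not from the upstream sources): applying the initializer to a plain
# linear layer with the same leaky-ReLU settings used for the style MLP below.
def _default_init_weights_sketch():
    layer = nn.Linear(512, 512)
    default_init_weights(layer, scale=1, bias_fill=0, a=0.2, mode="fan_in", nonlinearity="leaky_relu")
    return float(layer.weight.std())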
||||
|
||||
|
||||
class NormStyleCode(nn.Module):
|
||||
def forward(self, x):
|
||||
"""Normalize the style codes.
|
||||
Args:
|
||||
x (Tensor): Style codes with shape (b, c).
|
||||
Returns:
|
||||
Tensor: Normalized tensor.
|
||||
"""
|
||||
return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8)
|
||||
|
||||
|
||||
class ModulatedConv2d(nn.Module):
|
||||
"""Modulated Conv2d used in StyleGAN2.
|
||||
There is no bias in ModulatedConv2d.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
demodulate (bool): Whether to demodulate in the conv layer. Default: True.
|
||||
sample_mode (str | None): Indicating 'upsample', 'downsample' or None. Default: None.
|
||||
eps (float): A value added to the denominator for numerical stability. Default: 1e-8.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
eps=1e-8,
|
||||
):
|
||||
super(ModulatedConv2d, self).__init__()
|
||||
self.in_channels = in_channels
|
||||
self.out_channels = out_channels
|
||||
self.kernel_size = kernel_size
|
||||
self.demodulate = demodulate
|
||||
self.sample_mode = sample_mode
|
||||
self.eps = eps
|
||||
|
||||
# modulation inside each modulated conv
|
||||
self.modulation = nn.Linear(num_style_feat, in_channels, bias=True)
|
||||
# initialization
|
||||
default_init_weights(
|
||||
self.modulation,
|
||||
scale=1,
|
||||
bias_fill=1,
|
||||
a=0,
|
||||
mode="fan_in",
|
||||
nonlinearity="linear",
|
||||
)
|
||||
|
||||
self.weight = nn.Parameter(
|
||||
torch.randn(1, out_channels, in_channels, kernel_size, kernel_size)
|
||||
/ math.sqrt(in_channels * kernel_size**2)
|
||||
)
|
||||
self.padding = kernel_size // 2
|
||||
|
||||
def forward(self, x, style):
|
||||
"""Forward function.
|
||||
Args:
|
||||
x (Tensor): Tensor with shape (b, c, h, w).
|
||||
style (Tensor): Tensor with shape (b, num_style_feat).
|
||||
Returns:
|
||||
Tensor: Modulated tensor after convolution.
|
||||
"""
|
||||
b, c, h, w = x.shape # c = c_in
|
||||
# weight modulation
|
||||
style = self.modulation(style).view(b, 1, c, 1, 1)
|
||||
# self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1)
|
||||
weight = self.weight * style # (b, c_out, c_in, k, k)
|
||||
|
||||
if self.demodulate:
|
||||
demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps)
|
||||
weight = weight * demod.view(b, self.out_channels, 1, 1, 1)
|
||||
|
||||
weight = weight.view(
|
||||
b * self.out_channels, c, self.kernel_size, self.kernel_size
|
||||
)
|
||||
|
||||
# upsample or downsample if necessary
|
||||
if self.sample_mode == "upsample":
|
||||
x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
|
||||
elif self.sample_mode == "downsample":
|
||||
x = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
|
||||
|
||||
b, c, h, w = x.shape
|
||||
x = x.view(1, b * c, h, w)
|
||||
# weight: (b*c_out, c_in, k, k), groups=b
|
||||
out = F.conv2d(x, weight, padding=self.padding, groups=b)
|
||||
out = out.view(b, self.out_channels, *out.shape[2:4])
|
||||
|
||||
return out
|
||||
|
||||
def __repr__(self):
|
||||
return (
|
||||
f"{self.__class__.__name__}(in_channels={self.in_channels}, out_channels={self.out_channels}, "
|
||||
f"kernel_size={self.kernel_size}, demodulate={self.demodulate}, sample_mode={self.sample_mode})"
|
||||
)
|
||||
|
||||
|
||||
class StyleConv(nn.Module):
|
||||
"""Style conv used in StyleGAN2.
|
||||
Args:
|
||||
in_channels (int): Channel number of the input.
|
||||
out_channels (int): Channel number of the output.
|
||||
kernel_size (int): Size of the convolving kernel.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
demodulate (bool): Whether to demodulate in the conv layer. Default: True.
|
||||
sample_mode (str | None): Indicating 'upsample', 'downsample' or None. Default: None.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
):
|
||||
super(StyleConv, self).__init__()
|
||||
self.modulated_conv = ModulatedConv2d(
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size,
|
||||
num_style_feat,
|
||||
demodulate=demodulate,
|
||||
sample_mode=sample_mode,
|
||||
)
|
||||
self.weight = nn.Parameter(torch.zeros(1)) # for noise injection
|
||||
self.bias = nn.Parameter(torch.zeros(1, out_channels, 1, 1))
|
||||
self.activate = nn.LeakyReLU(negative_slope=0.2, inplace=True)
|
||||
|
||||
def forward(self, x, style, noise=None):
|
||||
# modulate
|
||||
out = self.modulated_conv(x, style) * 2**0.5 # for conversion
|
||||
# noise injection
|
||||
if noise is None:
|
||||
b, _, h, w = out.shape
|
||||
noise = out.new_empty(b, 1, h, w).normal_()
|
||||
out = out + self.weight * noise
|
||||
# add bias
|
||||
out = out + self.bias
|
||||
# activation
|
||||
out = self.activate(out)
|
||||
return out
|
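# Illustrative sketch (not from the upstream sources): the clean StyleConv uses plain
# nn.LeakyReLU plus an explicit bias instead of the fused CUDA activation.
def _style_conv_clean_sketch():
    sc = StyleConv(8, 16, kernel_size=3, num_style_feat=512)
    out = sc(torch.randn(2, 8, 16, 16), torch.randn(2, 512))  # sample_mode=None -> (2, 16, 16, 16)
    return out.shape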
||||
|
||||
|
||||
class ToRGB(nn.Module):
|
||||
"""To RGB (image space) from features.
|
||||
Args:
|
||||
in_channels (int): Channel number of input.
|
||||
num_style_feat (int): Channel number of style features.
|
||||
upsample (bool): Whether to upsample. Default: True.
|
||||
"""
|
||||
|
||||
def __init__(self, in_channels, num_style_feat, upsample=True):
|
||||
super(ToRGB, self).__init__()
|
||||
self.upsample = upsample
|
||||
self.modulated_conv = ModulatedConv2d(
|
||||
in_channels,
|
||||
3,
|
||||
kernel_size=1,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=False,
|
||||
sample_mode=None,
|
||||
)
|
||||
self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1))
|
||||
|
||||
def forward(self, x, style, skip=None):
|
||||
"""Forward function.
|
||||
Args:
|
||||
x (Tensor): Feature tensor with shape (b, c, h, w).
|
||||
style (Tensor): Tensor with shape (b, num_style_feat).
|
||||
skip (Tensor): Base/skip tensor. Default: None.
|
||||
Returns:
|
||||
Tensor: RGB images.
|
||||
"""
|
||||
out = self.modulated_conv(x, style)
|
||||
out = out + self.bias
|
||||
if skip is not None:
|
||||
if self.upsample:
|
||||
skip = F.interpolate(
|
||||
skip, scale_factor=2, mode="bilinear", align_corners=False
|
||||
)
|
||||
out = out + skip
|
||||
return out
|
||||
|
||||
|
||||
class ConstantInput(nn.Module):
|
||||
"""Constant input.
|
||||
Args:
|
||||
num_channel (int): Channel number of constant input.
|
||||
size (int): Spatial size of constant input.
|
||||
"""
|
||||
|
||||
def __init__(self, num_channel, size):
|
||||
super(ConstantInput, self).__init__()
|
||||
self.weight = nn.Parameter(torch.randn(1, num_channel, size, size))
|
||||
|
||||
def forward(self, batch):
|
||||
out = self.weight.repeat(batch, 1, 1, 1)
|
||||
return out
|
||||
|
||||
|
||||
class StyleGAN2GeneratorClean(nn.Module):
|
||||
"""Clean version of StyleGAN2 Generator.
|
||||
Args:
|
||||
out_size (int): The spatial size of outputs.
|
||||
num_style_feat (int): Channel number of style features. Default: 512.
|
||||
num_mlp (int): Layer number of MLP style layers. Default: 8.
|
||||
channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
|
||||
narrow (float): Narrow ratio for channels. Default: 1.0.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, out_size, num_style_feat=512, num_mlp=8, channel_multiplier=2, narrow=1
|
||||
):
|
||||
super(StyleGAN2GeneratorClean, self).__init__()
|
||||
# Style MLP layers
|
||||
self.num_style_feat = num_style_feat
|
||||
style_mlp_layers = [NormStyleCode()]
|
||||
for i in range(num_mlp):
|
||||
style_mlp_layers.extend(
|
||||
[
|
||||
nn.Linear(num_style_feat, num_style_feat, bias=True),
|
||||
nn.LeakyReLU(negative_slope=0.2, inplace=True),
|
||||
]
|
||||
)
|
||||
self.style_mlp = nn.Sequential(*style_mlp_layers)
|
||||
# initialization
|
||||
default_init_weights(
|
||||
self.style_mlp,
|
||||
scale=1,
|
||||
bias_fill=0,
|
||||
a=0.2,
|
||||
mode="fan_in",
|
||||
nonlinearity="leaky_relu",
|
||||
)
|
||||
|
||||
# channel list
|
||||
channels = {
|
||||
"4": int(512 * narrow),
|
||||
"8": int(512 * narrow),
|
||||
"16": int(512 * narrow),
|
||||
"32": int(512 * narrow),
|
||||
"64": int(256 * channel_multiplier * narrow),
|
||||
"128": int(128 * channel_multiplier * narrow),
|
||||
"256": int(64 * channel_multiplier * narrow),
|
||||
"512": int(32 * channel_multiplier * narrow),
|
||||
"1024": int(16 * channel_multiplier * narrow),
|
||||
}
|
||||
self.channels = channels
|
||||
|
||||
self.constant_input = ConstantInput(channels["4"], size=4)
|
||||
self.style_conv1 = StyleConv(
|
||||
channels["4"],
|
||||
channels["4"],
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
)
|
||||
self.to_rgb1 = ToRGB(channels["4"], num_style_feat, upsample=False)
|
||||
|
||||
self.log_size = int(math.log(out_size, 2))
|
||||
self.num_layers = (self.log_size - 2) * 2 + 1
|
||||
self.num_latent = self.log_size * 2 - 2
|
||||
|
||||
self.style_convs = nn.ModuleList()
|
||||
self.to_rgbs = nn.ModuleList()
|
||||
self.noises = nn.Module()
|
||||
|
||||
in_channels = channels["4"]
|
||||
# noise
|
||||
for layer_idx in range(self.num_layers):
|
||||
resolution = 2 ** ((layer_idx + 5) // 2)
|
||||
shape = [1, 1, resolution, resolution]
|
||||
self.noises.register_buffer(f"noise{layer_idx}", torch.randn(*shape))
|
||||
# style convs and to_rgbs
|
||||
for i in range(3, self.log_size + 1):
|
||||
out_channels = channels[f"{2**i}"]
|
||||
self.style_convs.append(
|
||||
StyleConv(
|
||||
in_channels,
|
||||
out_channels,
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode="upsample",
|
||||
)
|
||||
)
|
||||
self.style_convs.append(
|
||||
StyleConv(
|
||||
out_channels,
|
||||
out_channels,
|
||||
kernel_size=3,
|
||||
num_style_feat=num_style_feat,
|
||||
demodulate=True,
|
||||
sample_mode=None,
|
||||
)
|
||||
)
|
||||
self.to_rgbs.append(ToRGB(out_channels, num_style_feat, upsample=True))
|
||||
in_channels = out_channels
|
||||
|
||||
def make_noise(self):
|
||||
"""Make noise for noise injection."""
|
||||
device = self.constant_input.weight.device
|
||||
noises = [torch.randn(1, 1, 4, 4, device=device)]
|
||||
|
||||
for i in range(3, self.log_size + 1):
|
||||
for _ in range(2):
|
||||
noises.append(torch.randn(1, 1, 2**i, 2**i, device=device))
|
||||
|
||||
return noises
|
||||
|
||||
def get_latent(self, x):
|
||||
return self.style_mlp(x)
|
||||
|
||||
def mean_latent(self, num_latent):
|
||||
latent_in = torch.randn(
|
||||
num_latent, self.num_style_feat, device=self.constant_input.weight.device
|
||||
)
|
||||
latent = self.style_mlp(latent_in).mean(0, keepdim=True)
|
||||
return latent
|
||||
|
||||
def forward(
|
||||
self,
|
||||
styles,
|
||||
input_is_latent=False,
|
||||
noise=None,
|
||||
randomize_noise=True,
|
||||
truncation=1,
|
||||
truncation_latent=None,
|
||||
inject_index=None,
|
||||
return_latents=False,
|
||||
):
|
||||
"""Forward function for StyleGAN2GeneratorClean.
|
||||
Args:
|
||||
styles (list[Tensor]): Sample codes of styles.
|
||||
input_is_latent (bool): Whether input is latent style. Default: False.
|
||||
noise (Tensor | None): Input noise or None. Default: None.
|
||||
randomize_noise (bool): Randomize noise, used when 'noise' is None. Default: True.
|
||||
truncation (float): The truncation ratio. Default: 1.
|
||||
truncation_latent (Tensor | None): The truncation latent tensor. Default: None.
|
||||
inject_index (int | None): The injection index for mixing noise. Default: None.
|
||||
return_latents (bool): Whether to return style latents. Default: False.
|
||||
"""
|
||||
# style codes -> latents with Style MLP layer
|
||||
if not input_is_latent:
|
||||
styles = [self.style_mlp(s) for s in styles]
|
||||
# noises
|
||||
if noise is None:
|
||||
if randomize_noise:
|
||||
noise = [None] * self.num_layers # for each style conv layer
|
||||
else: # use the stored noise
|
||||
noise = [
|
||||
getattr(self.noises, f"noise{i}") for i in range(self.num_layers)
|
||||
]
|
||||
# style truncation
|
||||
if truncation < 1:
|
||||
style_truncation = []
|
||||
for style in styles:
|
||||
style_truncation.append(
|
||||
truncation_latent + truncation * (style - truncation_latent)
|
||||
)
|
||||
styles = style_truncation
|
||||
# get style latents with injection
|
||||
if len(styles) == 1:
|
||||
inject_index = self.num_latent
|
||||
|
||||
if styles[0].ndim < 3:
|
||||
# repeat latent code for all the layers
|
||||
latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
else: # used for encoder with different latent code for each layer
|
||||
latent = styles[0]
|
||||
elif len(styles) == 2: # mixing noises
|
||||
if inject_index is None:
|
||||
inject_index = random.randint(1, self.num_latent - 1)
|
||||
latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1)
|
||||
latent2 = (
|
||||
styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1)
|
||||
)
|
||||
latent = torch.cat([latent1, latent2], 1)
|
||||
|
||||
# main generation
|
||||
out = self.constant_input(latent.shape[0])
|
||||
out = self.style_conv1(out, latent[:, 0], noise=noise[0])
|
||||
skip = self.to_rgb1(out, latent[:, 1])
|
||||
|
||||
i = 1
|
||||
for conv1, conv2, noise1, noise2, to_rgb in zip(
|
||||
self.style_convs[::2],
|
||||
self.style_convs[1::2],
|
||||
noise[1::2],
|
||||
noise[2::2],
|
||||
self.to_rgbs,
|
||||
):
|
||||
out = conv1(out, latent[:, i], noise=noise1)
|
||||
out = conv2(out, latent[:, i + 1], noise=noise2)
|
||||
skip = to_rgb(out, latent[:, i + 2], skip) # feature back to the rgb space
|
||||
i += 2
|
||||
|
||||
image = skip
|
||||
|
||||
if return_latents:
|
||||
return image, latent
|
||||
else:
|
||||
return image, None
|
194
comfy_extras/chainner_models/architecture/face/upfirdn2d.py
Normal file
@@ -0,0 +1,194 @@
|
||||
# pylint: skip-file
|
||||
# type: ignore
|
||||
# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py # noqa:E501
|
||||
|
||||
import os
|
||||
|
||||
import torch
|
||||
from torch.autograd import Function
|
||||
from torch.nn import functional as F
|
||||
|
||||
upfirdn2d_ext = None
|
||||
|
||||
|
||||
class UpFirDn2dBackward(Function):
|
||||
@staticmethod
|
||||
def forward(
|
||||
ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size
|
||||
):
|
||||
up_x, up_y = up
|
||||
down_x, down_y = down
|
||||
g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad
|
||||
|
||||
grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1)
|
||||
|
||||
grad_input = upfirdn2d_ext.upfirdn2d(
|
||||
grad_output,
|
||||
grad_kernel,
|
||||
down_x,
|
||||
down_y,
|
||||
up_x,
|
||||
up_y,
|
||||
g_pad_x0,
|
||||
g_pad_x1,
|
||||
g_pad_y0,
|
||||
g_pad_y1,
|
||||
)
|
||||
grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3])
|
||||
|
||||
ctx.save_for_backward(kernel)
|
||||
|
||||
pad_x0, pad_x1, pad_y0, pad_y1 = pad
|
||||
|
||||
ctx.up_x = up_x
|
||||
ctx.up_y = up_y
|
||||
ctx.down_x = down_x
|
||||
ctx.down_y = down_y
|
||||
ctx.pad_x0 = pad_x0
|
||||
ctx.pad_x1 = pad_x1
|
||||
ctx.pad_y0 = pad_y0
|
||||
ctx.pad_y1 = pad_y1
|
||||
ctx.in_size = in_size
|
||||
ctx.out_size = out_size
|
||||
|
||||
return grad_input
|
||||
|
||||
@staticmethod
|
||||
def backward(ctx, gradgrad_input):
|
||||
(kernel,) = ctx.saved_tensors
|
||||
|
||||
gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1)
|
||||
|
||||
gradgrad_out = upfirdn2d_ext.upfirdn2d(
|
||||
gradgrad_input,
|
||||
kernel,
|
||||
ctx.up_x,
|
||||
ctx.up_y,
|
||||
ctx.down_x,
|
||||
ctx.down_y,
|
||||
ctx.pad_x0,
|
||||
ctx.pad_x1,
|
||||
ctx.pad_y0,
|
||||
ctx.pad_y1,
|
||||
)
|
||||
# gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0],
|
||||
# ctx.out_size[1], ctx.in_size[3])
|
||||
gradgrad_out = gradgrad_out.view(
|
||||
ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]
|
||||
)
|
||||
|
||||
return gradgrad_out, None, None, None, None, None, None, None, None
|
||||
|
||||
|
||||
class UpFirDn2d(Function):
|
||||
@staticmethod
|
||||
def forward(ctx, input, kernel, up, down, pad):
|
||||
up_x, up_y = up
|
||||
down_x, down_y = down
|
||||
pad_x0, pad_x1, pad_y0, pad_y1 = pad
|
||||
|
||||
kernel_h, kernel_w = kernel.shape
|
||||
_, channel, in_h, in_w = input.shape
|
||||
ctx.in_size = input.shape
|
||||
|
||||
input = input.reshape(-1, in_h, in_w, 1)
|
||||
|
||||
ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1]))
|
||||
|
||||
out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
|
||||
out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
|
||||
ctx.out_size = (out_h, out_w)
|
||||
|
||||
ctx.up = (up_x, up_y)
|
||||
ctx.down = (down_x, down_y)
|
||||
ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1)
|
||||
|
||||
g_pad_x0 = kernel_w - pad_x0 - 1
|
||||
g_pad_y0 = kernel_h - pad_y0 - 1
|
||||
g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1
|
||||
g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1
|
||||
|
||||
ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1)
|
||||
|
||||
out = upfirdn2d_ext.upfirdn2d(
|
||||
input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
|
||||
)
|
||||
# out = out.view(major, out_h, out_w, minor)
|
||||
out = out.view(-1, channel, out_h, out_w)
|
||||
|
||||
return out
|
||||
|
||||
@staticmethod
|
||||
def backward(ctx, grad_output):
|
||||
kernel, grad_kernel = ctx.saved_tensors
|
||||
|
||||
grad_input = UpFirDn2dBackward.apply(
|
||||
grad_output,
|
||||
kernel,
|
||||
grad_kernel,
|
||||
ctx.up,
|
||||
ctx.down,
|
||||
ctx.pad,
|
||||
ctx.g_pad,
|
||||
ctx.in_size,
|
||||
ctx.out_size,
|
||||
)
|
||||
|
||||
return grad_input, None, None, None, None
|
||||
|
||||
|
||||
def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)):
|
||||
if input.device.type == "cpu":
|
||||
out = upfirdn2d_native(
|
||||
input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1]
|
||||
)
|
||||
else:
|
||||
out = UpFirDn2d.apply(
|
||||
input, kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])
|
||||
)
|
||||
|
||||
return out
|
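# Illustrative sketch (not from the upstream sources): a separable 4-tap blur on CPU via the
# native path (the compiled extension is stubbed to None above), keeping the spatial size.
def _upfirdn2d_blur_sketch():
    k1d = torch.tensor([1.0, 3.0, 3.0, 1.0])
    kernel = torch.outer(k1d, k1d)
    kernel = kernel / kernel.sum()
    img = torch.randn(1, 3, 16, 16)
    blurred = upfirdn2d(img, kernel, up=1, down=1, pad=(2, 1))  # -> (1, 3, 16, 16)
    return blurred.shape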
||||
|
||||
|
||||
def upfirdn2d_native(
|
||||
input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
|
||||
):
|
||||
_, channel, in_h, in_w = input.shape
|
||||
input = input.reshape(-1, in_h, in_w, 1)
|
||||
|
||||
_, in_h, in_w, minor = input.shape
|
||||
kernel_h, kernel_w = kernel.shape
|
||||
|
||||
out = input.view(-1, in_h, 1, in_w, 1, minor)
|
||||
out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
|
||||
out = out.view(-1, in_h * up_y, in_w * up_x, minor)
|
||||
|
||||
out = F.pad(
|
||||
out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
|
||||
)
|
||||
out = out[
|
||||
:,
|
||||
max(-pad_y0, 0) : out.shape[1] - max(-pad_y1, 0),
|
||||
max(-pad_x0, 0) : out.shape[2] - max(-pad_x1, 0),
|
||||
:,
|
||||
]
|
||||
|
||||
out = out.permute(0, 3, 1, 2)
|
||||
out = out.reshape(
|
||||
[-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
|
||||
)
|
||||
w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
|
||||
out = F.conv2d(out, w)
|
||||
out = out.reshape(
|
||||
-1,
|
||||
minor,
|
||||
in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
|
||||
in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
|
||||
)
|
||||
out = out.permute(0, 2, 3, 1)
|
||||
out = out[:, ::down_y, ::down_x, :]
|
||||
|
||||
out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1
|
||||
out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1
|
||||
|
||||
return out.view(-1, channel, out_h, out_w)
|