
What is the purpose of Python's inner classes?




Python's inner/nested classes confuse me. Is there something that can't be accomplished without them? If so, what is it?


Quoted from http://www.geekinterview.com/question_details/64739:

Advantages of inner classes:

  • Logical grouping of classes: If a class is useful to only one other class, it is logical to embed it in that class and keep the two together. Nesting such "helper classes" makes the package more streamlined.
  • Increased encapsulation: Consider two top-level classes, A and B, where B needs access to members of A that would otherwise be declared private. By hiding class B within class A, A's members can be declared private and B can still access them. In addition, B itself can be hidden from the outside world.
  • More readable, maintainable code: Nesting small classes within top-level classes places the code closer to where it is used.

The biggest advantage is organization. Anything that can be done with inner classes can also be done without them.


Is there something that can't be accomplished without them?

No. It is absolutely equivalent to defining the class at the top level and then copying a reference to it into the outer class.

I don't think there is any special reason nested classes are 'allowed'; it simply wouldn't make sense to explicitly 'disallow' them either.

If you are looking for a class that exists within the lifecycle of an outer/owner object and always holds a reference to an instance of the outer class, the way inner classes work in Java, then Python's nested classes are not that. But you can hack up something similar:

import weakref, new

class innerclass(object):
    """Descriptor for making inner classes.

    Adds a property 'owner' to the inner class, pointing to the outer
    owner instance.
    """

    # Use a weakref dict to memoise previous results so that
    # instance.Inner() always returns the same inner classobj.
    #
    def __init__(self, inner):
        self.inner= inner
        self.instances= weakref.WeakKeyDictionary()

    # Not thread-safe - consider adding a lock.
    #
    def __get__(self, instance, _):
        if instance is None:
            return self.inner
        if instance not in self.instances:
            self.instances[instance]= new.classobj(
                self.inner.__name__, (self.inner,), {'owner': instance}
            )
        return self.instances[instance]


# Using an inner class
#
class Outer(object):
    @innerclass
    class Inner(object):
        def __repr__(self):
            return '<%s.%s inner object of %r>' % (
                self.owner.__class__.__name__,
                self.__class__.__name__,
                self.owner
            )

>>> o1= Outer()
>>> o2= Outer()
>>> i1= o1.Inner()
>>> i1
<Outer.Inner inner object of <__main__.Outer object at 0x7fb2cd62de90>>
>>> isinstance(i1, Outer.Inner)
True
>>> isinstance(i1, o1.Inner)
True
>>> isinstance(i1, o2.Inner)
False

(This uses the class decorators new in Python 2.6 and 3.0; with an earlier Python you would have to write "Inner = innerclass(Inner)" after the class definition.)
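Note that the new module used above exists only in Python 2; it was removed in Python 3. A rough sketch of the same descriptor adapted for Python 3 (my adaptation, not part of the original answer) simply replaces new.classobj with the built-in type():

import weakref

class innerclass(object):
    """Same descriptor as above, but using type() instead of new.classobj."""

    def __init__(self, inner):
        self.inner = inner
        self.instances = weakref.WeakKeyDictionary()

    def __get__(self, instance, owner):
        if instance is None:
            return self.inner
        if instance not in self.instances:
            # type(name, bases, namespace) builds the per-instance subclass,
            # just as new.classobj did in Python 2.
            self.instances[instance] = type(
                self.inner.__name__, (self.inner,), {'owner': instance}
            )
        return self.instances[instance]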


There's something you need to wrap your head around to be able to understand this. In most languages, class definitions are directives to the compiler. That is, the class is created before the program is ever run. In python, all statements are executable. That means that this statement:

class foo(object):
    pass

is a statement that is executed at runtime just like this one:

x = y + z

This means that not only can you create classes within other classes, you can create classes anywhere you want to. Consider this code:

def foo():
    class bar(object):
        ...
    z = bar()

Thus, the idea of an "inner class" isn't really a language construct; it's a programmer construct. Guido has a very good summary of how this came about. But essentially, the basic idea is that allowing it simplifies the language's grammar.
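As a tiny, contrived illustration of this (not from the original answer), a class statement can even run conditionally, like any other statement:

import sys

# The class statement executes at runtime, so it can sit inside an if.
if sys.maxsize > 2**32:
    class Pointer(object):
        size = 8   # 64-bit build
else:
    class Pointer(object):
        size = 4   # 32-bit build

print(Pointer.size)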


Nesting classes within classes:

  • Nested classes bloat the class definition, making it harder to see what's going on.

  • Nested classes can create coupling that would make testing more difficult.

  • In Python, unlike Java, you can put more than one class in a file/module, so the class can still stay close to the top-level class, and its name can even be prefixed with an "_" to signal that others shouldn't be using it.

The place where nested classes can prove useful is within functions:

def some_func(a, b, c):
    class SomeClass(a):
        def some_method(self):
            return b
    SomeClass.__doc__ = c
    return SomeClass

The class captures the values passed to the function, allowing you to create classes dynamically, somewhat like template metaprogramming in C++. A usage sketch follows below.
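For instance, a quick usage sketch of some_func above (the values here are made up for illustration; assigning to a class's __doc__ requires Python 3.3+):

# Build a class on the fly from runtime values.
Base = dict                      # any class can serve as the base
Cls = some_func(Base, 42, "a dynamically built helper class")

obj = Cls()
assert isinstance(obj, Base)
assert obj.some_method() == 42
assert Cls.__doc__ == "a dynamically built helper class"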


I understand the arguments against nested classes, but there is a case for using them on some occasions. Imagine I'm creating a doubly-linked list class, and I need a node class for maintaining the nodes. I have two choices: create the Node class inside the DoublyLinkedList class, or create the Node class outside it. I prefer the first choice in this case, because the Node class is only meaningful inside the DoublyLinkedList class. While there's no hiding/encapsulation benefit, there is a grouping benefit in being able to say the Node class is part of the DoublyLinkedList class.
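A minimal sketch of that layout (the names and methods here are illustrative, not taken from any particular library):

class DoublyLinkedList(object):

    class Node(object):
        """Helper class; only meaningful as part of DoublyLinkedList."""
        def __init__(self, value):
            self.value = value
            self.prev = None
            self.next = None

    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        node = self.Node(value)   # the nested helper is reached via the owner class
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node

    def __iter__(self):
        current = self.head
        while current is not None:
            yield current.value
            current = current.next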


I have used Python's inner classes to create deliberately buggy subclasses within unittest functions (i.e. inside def test_something():) in order to get closer to 100% test coverage (e.g. testing very rarely triggered logging statements by overriding some methods).

In retrospect it's similar to Ed's answer https://stackoverflow.com/a/722036/1101109
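A hedged sketch of what that pattern can look like (all names here are invented for illustration): a deliberately broken subclass is defined inside the test function itself, just to force a rarely taken error/logging branch.

import logging
import unittest

class Widget(object):                     # stand-in for some production class
    def render(self):
        return "ok"

def safe_render(widget):                  # stand-in for the code under test
    try:
        return widget.render()
    except Exception:
        logging.exception("render failed")   # the rarely triggered branch
        return None

class SafeRenderTests(unittest.TestCase):
    def test_render_failure_is_logged(self):
        class BrokenWidget(Widget):       # inner class, local to this test
            def render(self):
                raise RuntimeError("boom")
        self.assertIsNone(safe_render(BrokenWidget()))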

Such inner classes should go out of scope and be ready for garbage collection once all references to them have been removed. For instance, take the following inner.py file:

class A(object):
    pass

def scope():
    class Buggy(A):
        """Do tests or something"""
    assert isinstance(Buggy(), A)

I get the following curious results under OSX Python 2.7.6:

>>> from inner import A, scope
>>> A.__subclasses__()
[]
>>> scope()
>>> A.__subclasses__()
[<class 'inner.Buggy'>]
>>> del A, scope
>>> from inner import A
>>> A.__subclasses__()
[<class 'inner.Buggy'>]
>>> del A
>>> import gc
>>> gc.collect()
0
>>> gc.collect()  # Yes I needed to call the gc twice, seems reproducible
3
>>> from inner import A
>>> A.__subclasses__()
[]

Hint - Don't go on and try doing this with Django models, which seemed to keep other (cached?) references to my buggy classes.

So in general, I wouldn't recommend using inner classes for this kind of purpose unless you really do value that 100% test coverage and can't use other methods. Though I think it's nice to be aware that if you use __subclasses__(), it can sometimes get polluted by inner classes. Either way, if you followed this far, I think we're pretty deep into Python at this point, private dunderscores and all.


The main use case I have for this is to prevent a proliferation of small modules and to avoid namespace pollution when separate modules are not needed: for example, when I am extending an existing class that must reference another helper class which should always be coupled to it. I may have a utils.py module with many helper classes that aren't necessarily coupled together, but I want to reinforce coupling for some of them. For example, when I implement https://stackoverflow.com/a/8274307/2718295:

:utils.py:

import json, decimal

class Helper1(object):
    pass

class Helper2(object):
    pass

# Here is the notorious JSONEncoder extension to serialize Decimals to JSON floats
class DecimalJSONEncoder(json.JSONEncoder):

    class _repr_decimal(float): # Because float.__repr__ cannot be monkey patched
        def __init__(self, obj):
            self._obj = obj
        def __repr__(self):
            return '{:f}'.format(self._obj)

    def default(self, obj):  # override JSONEncoder.default
        if isinstance(obj, decimal.Decimal):
            return self._repr_decimal(obj)
        # else fall back to the base class, which raises TypeError for unknown types
        # (equivalently: return json.JSONEncoder.default(self, obj))
        return super(DecimalJSONEncoder, self).default(obj)

Then we can:

>>> from utils import DecimalJSONEncoder
>>> import json, decimal
>>> json.dumps({'key1': decimal.Decimal('1.12345678901234'),
... 'key2': 'strKey2Value'}, cls=DecimalJSONEncoder)
'{"key2": "strKey2Value", "key1": 1.12345678901234}'

Of course, we could have eschewed inheriting from json.JSONEncoder altogether and just overridden default():


import decimal, json

class Helper1(object):
    pass

def json_encoder_decimal(obj):
    class _repr_decimal(float):
        ...  # same as the _repr_decimal defined above

    if isinstance(obj, decimal.Decimal):
        return _repr_decimal(obj)

    # Defer to the stock encoder, which raises TypeError for unsupported types.
    return json.JSONEncoder().default(obj)


>>> json.dumps({'key1': decimal.Decimal('1.12345678901234')}, default=json_encoder_decimal)
'{"key1": 1.12345678901234}'

But sometimes just for convention, you want utils to be composed of classes for extensibility.

Here's another use-case: I want a factory for mutables in my OuterClass without having to invoke copy:

class OuterClass(object):

    class DTemplate(dict):
        def __init__(self):
            self.update({'key1': [1, 2, 3],
                'key2': {'subkey': [4, 5, 6]}})


    def __init__(self):
        self.outerclass_dict = {
            'outerkey1': self.DTemplate(),
            'outerkey2': self.DTemplate()}



obj = OuterClass()
obj.outerclass_dict['outerkey1']['key2']['subkey'].append(4)
assert obj.outerclass_dict['outerkey2']['key2']['subkey'] == [4,5,6]

I prefer this pattern over the @staticmethod decorator you would otherwise use for a factory function.
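For comparison, the @staticmethod factory being alluded to might look roughly like this (my reading of the trade-off, not code from the original post):

class OuterClassAlt(object):

    @staticmethod
    def make_template():
        # Rebuilds the mutable structure on every call, same effect as DTemplate().
        return {'key1': [1, 2, 3], 'key2': {'subkey': [4, 5, 6]}}

    def __init__(self):
        self.outerclass_dict = {
            'outerkey1': self.make_template(),
            'outerkey2': self.make_template()}

Both versions avoid sharing one mutable default between instances; the nested-class version just keeps the "template" spelled as a type rather than a function.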

Source URL: https://stackoverflow.com/questions/719705/what-is-the-purpose-of-pythons-inner-classes
