Search
598 questions found on Aha
- Life Tips · Daily Life  Q. [Python] If I build a multiplication (gugudan) table with a nested list, which order is more efficient? Assumption: there is a double for loop over x and y, indexing with [][]. 1. Increment the index of the first (outer) list first: [0][0] -> [1][0] -> [2][0] -> ..... 2. Increment the index of the second (inner) list first: [0][0] -> [0][1] -> [0][2] -> .....
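For the nested-list question above, a minimal sketch of the two traversal orders (the names are illustrative only). With a Python list of lists, letting the second index vary fastest (option 2) walks one row list at a time, which is the conventional order and lets each row be fetched once; letting the first index vary fastest (option 1) is equally correct but touches a different row list on every step. For a 9x9 table the timing difference is negligible, so readability can decide.

    # Build table[x][y] = (x+1)*(y+1) for a 9x9 times table.
    table = [[(x + 1) * (y + 1) for y in range(9)] for x in range(9)]

    # Option 2 in the question: second index varies fastest ([0][0] -> [0][1] -> [0][2] ...).
    # Each row list is fetched once per outer step.
    for x in range(9):
        row = table[x]
        for y in range(9):
            _ = row[y]

    # Option 1 in the question: first index varies fastest ([0][0] -> [1][0] -> [2][0] ...).
    # Correct as well, but every step indexes into a different row list.
    for y in range(9):
        for x in range(9):
            _ = table[x][y]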
- Life Tips · Daily Life  Q. How should I build this Excel formula? Compare the A value with the B value, and the C value with the D value: if A is greater than B, output the letter X and compute A+B; if C is less than D, output the letter Y and compute C-B; if A is greater than B and at the same time C is less than D (both conditions satisfied at once), output the letter Z and output the value E; if the two conditions are not both satisfied, output the number 0. I'm trying to write this with the IF() function, but it only handles two outcomes, so I can't manage a formula that compares several conditions and outputs values like this. I'd appreciate an answer from the experts.
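For the Excel question above, the usual spreadsheet answer is nesting IF() (optionally with AND()), but since the tricky part is the ordering of the four branches, here is a minimal Python sketch of just the decision logic under one reading of the priorities; a, b, c, d, e are stand-ins for the cell values A..E and are not Excel syntax.

    def label_and_value(a, b, c, d, e):
        if a > b and c < d:      # both conditions at once
            return "Z", e
        elif a > b:              # only the first condition
            return "X", a + b
        elif c < d:              # only the second condition (the question says C - B here)
            return "Y", c - b
        else:                    # neither condition: just the number 0
            return 0

    print(label_and_value(5, 3, 1, 4, 99))   # ('Z', 99)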
- Life Tips · Daily Life  Q. What do the letters in Generation X, Y, and Z stand for? I saw the expressions Generation X, Y, and Z while watching an ad. I'm in my 40s, so I'd say I belong to Generation X, but does each letter actually stand for something? I've lived as part of Generation X without even knowing what the "X" means. Can someone clear this up?
- Life Tips · Daily Life  Q. An error running Python code in IDLE?  Hello, while studying coding I used the code from http://solarisailab.com/archives/486. The code is:

"""
TensorFlow translation of the torch example found here (written by SeanNaren).
https://github.com/SeanNaren/TorchQLearningExample
Original keras example found here (written by Eder Santana).
https://gist.github.com/EderSantana/c7222daa328f0e885093#file-qlearn-py-L164
The agent plays a game of catch. Fruits drop from the sky and the agent can choose
the actions left/stay/right to catch the fruit before it reaches the ground.
"""
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import random
import math
import os

# Parameters
epsilon = 1                    # The probability of choosing a random action (in training). This decays as iterations increase. (0 to 1)
epsilonMinimumValue = 0.001    # The minimum value we want epsilon to reach in training. (0 to 1)
nbActions = 3                  # The number of actions. Since we only have left/stay/right that means 3 actions.
epoch = 1001                   # The number of games we want the system to run for.
hiddenSize = 100               # Number of neurons in the hidden layers.
maxMemory = 500                # How large should the memory be (where it stores its past experiences).
batchSize = 50                 # The mini-batch size for training. Samples are randomly taken from memory till mini-batch size.
gridSize = 10                  # The size of the grid that the agent is going to play the game on.
nbStates = gridSize * gridSize # We eventually flatten to a 1d tensor to feed the network.
discount = 0.9                 # The discount is used to force the network to choose states that lead to the reward quicker (0 to 1)
learningRate = 0.2             # Learning Rate for Stochastic Gradient Descent (our optimizer).

# Create the base model.
X = tf.placeholder(tf.float32, [None, nbStates])
W1 = tf.Variable(tf.truncated_normal([nbStates, hiddenSize], stddev=1.0 / math.sqrt(float(nbStates))))
b1 = tf.Variable(tf.truncated_normal([hiddenSize], stddev=0.01))
input_layer = tf.nn.relu(tf.matmul(X, W1) + b1)
W2 = tf.Variable(tf.truncated_normal([hiddenSize, hiddenSize], stddev=1.0 / math.sqrt(float(hiddenSize))))
b2 = tf.Variable(tf.truncated_normal([hiddenSize], stddev=0.01))
hidden_layer = tf.nn.relu(tf.matmul(input_layer, W2) + b2)
W3 = tf.Variable(tf.truncated_normal([hiddenSize, nbActions], stddev=1.0 / math.sqrt(float(hiddenSize))))
b3 = tf.Variable(tf.truncated_normal([nbActions], stddev=0.01))
output_layer = tf.matmul(hidden_layer, W3) + b3

# True labels
Y = tf.placeholder(tf.float32, [None, nbActions])

# Mean squared error cost function
cost = tf.reduce_sum(tf.square(Y - output_layer)) / (2 * batchSize)

# Stochastic Gradient Descent Optimizer
optimizer = tf.train.GradientDescentOptimizer(learningRate).minimize(cost)

# Helper function: Chooses a random value between the two boundaries.
def randf(s, e):
    return (float(random.randrange(0, (e - s) * 9999)) / 10000) + s

# The environment: Handles interactions and contains the state of the environment
class CatchEnvironment():
    def __init__(self, gridSize):
        self.gridSize = gridSize
        self.nbStates = self.gridSize * self.gridSize
        self.state = np.empty(3, dtype=np.uint8)

    # Returns the state of the environment.
    def observe(self):
        canvas = self.drawState()
        canvas = np.reshape(canvas, (-1, self.nbStates))
        return canvas

    def drawState(self):
        canvas = np.zeros((self.gridSize, self.gridSize))
        canvas[self.state[0]-1, self.state[1]-1] = 1  # Draw the fruit.
        # Draw the basket. The basket takes the adjacent two places to the position of basket.
        canvas[self.gridSize-1, self.state[2]-1 - 1] = 1
        canvas[self.gridSize-1, self.state[2]-1] = 1
        canvas[self.gridSize-1, self.state[2]-1 + 1] = 1
        return canvas

    # Resets the environment. Randomly initialise the fruit position (always at the top to begin with) and bucket.
    def reset(self):
        initialFruitColumn = random.randrange(1, self.gridSize + 1)
        initialBucketPosition = random.randrange(2, self.gridSize + 1 - 1)
        self.state = np.array([1, initialFruitColumn, initialBucketPosition])
        return self.getState()

    def getState(self):
        stateInfo = self.state
        fruit_row = stateInfo[0]
        fruit_col = stateInfo[1]
        basket = stateInfo[2]
        return fruit_row, fruit_col, basket

    # Returns the reward that the agent has gained for being in the current environment state.
    def getReward(self):
        fruitRow, fruitColumn, basket = self.getState()
        if (fruitRow == self.gridSize - 1):  # If the fruit has reached the bottom.
            if (abs(fruitColumn - basket) <= 1):  # Check if the basket caught the fruit.
                return 1
            else:
                return -1
        else:
            return 0

    def isGameOver(self):
        if (self.state[0] == self.gridSize - 1):
            return True
        else:
            return False

    def updateState(self, action):
        if (action == 1):
            action = -1
        elif (action == 2):
            action = 0
        else:
            action = 1
        fruitRow, fruitColumn, basket = self.getState()
        newBasket = min(max(2, basket + action), self.gridSize - 1)  # The min/max prevents the basket from moving out of the grid.
        fruitRow = fruitRow + 1  # The fruit is falling by 1 every action.
        self.state = np.array([fruitRow, fruitColumn, newBasket])

    # Action can be 1 (move left) or 2 (move right)
    def act(self, action):
        self.updateState(action)
        reward = self.getReward()
        gameOver = self.isGameOver()
        return self.observe(), reward, gameOver, self.getState()  # For purpose of the visual, I also return the state.

# The memory: Handles the internal memory that we add experiences that occur based on agent's actions,
# and creates batches of experiences based on the mini-batch size for training.
class ReplayMemory:
    def __init__(self, gridSize, maxMemory, discount):
        self.maxMemory = maxMemory
        self.gridSize = gridSize
        self.nbStates = self.gridSize * self.gridSize
        self.discount = discount
        canvas = np.zeros((self.gridSize, self.gridSize))
        canvas = np.reshape(canvas, (-1, self.nbStates))
        self.inputState = np.empty((self.maxMemory, 100), dtype=np.float32)
        self.actions = np.zeros(self.maxMemory, dtype=np.uint8)
        self.nextState = np.empty((self.maxMemory, 100), dtype=np.float32)
        self.gameOver = np.empty(self.maxMemory, dtype=np.bool)
        self.rewards = np.empty(self.maxMemory, dtype=np.int8)
        self.count = 0
        self.current = 0

    # Appends the experience to the memory.
    def remember(self, currentState, action, reward, nextState, gameOver):
        self.actions[self.current] = action
        self.rewards[self.current] = reward
        self.inputState[self.current, ...] = currentState
        self.nextState[self.current, ...] = nextState
        self.gameOver[self.current] = gameOver
        self.count = max(self.count, self.current + 1)
        self.current = (self.current + 1) % self.maxMemory

    def getBatch(self, model, batchSize, nbActions, nbStates, sess, X):
        # We check to see if we have enough memory inputs to make an entire batch, if not we create the biggest
        # batch we can (at the beginning of training we will not have enough experience to fill a batch).
        memoryLength = self.count
        chosenBatchSize = min(batchSize, memoryLength)
        inputs = np.zeros((chosenBatchSize, nbStates))
        targets = np.zeros((chosenBatchSize, nbActions))
        # Fill the inputs and targets up.
        for i in xrange(chosenBatchSize):
            if memoryLength == 1:
                memoryLength = 2
            # Choose a random memory experience to add to the batch.
            randomIndex = random.randrange(1, memoryLength)
            current_inputState = np.reshape(self.inputState[randomIndex], (1, 100))
            target = sess.run(model, feed_dict={X: current_inputState})
            current_nextState = np.reshape(self.nextState[randomIndex], (1, 100))
            current_outputs = sess.run(model, feed_dict={X: current_nextState})
            # Gives us Q_sa, the max q for the next state.
            nextStateMaxQ = np.amax(current_outputs)
            if (self.gameOver[randomIndex] == True):
                target[0, [self.actions[randomIndex]-1]] = self.rewards[randomIndex]
            else:
                # reward + discount(gamma) * max_a' Q(s',a')
                # We are setting the Q-value for the action to r + gamma*max a' Q(s', a'). The rest stay the same
                # to give an error of 0 for those outputs.
                target[0, [self.actions[randomIndex]-1]] = self.rewards[randomIndex] + self.discount * nextStateMaxQ
            # Update the inputs and targets.
            inputs[i] = current_inputState
            targets[i] = target
        return inputs, targets

def main(_):
    print("Training new model")
    # Define Environment
    env = CatchEnvironment(gridSize)
    # Define Replay Memory
    memory = ReplayMemory(gridSize, maxMemory, discount)
    # Add ops to save and restore all the variables.
    saver = tf.train.Saver()
    winCount = 0
    with tf.Session() as sess:
        tf.initialize_all_variables().run()
        for i in xrange(epoch):
            # Initialize the environment.
            err = 0
            env.reset()
            isGameOver = False
            # The initial state of the environment.
            currentState = env.observe()
            while (isGameOver != True):
                action = -9999  # action initialization
                # Decides if we should choose a random action, or an action from the policy network.
                global epsilon
                if (randf(0, 1) <= epsilon):
                    action = random.randrange(1, nbActions+1)
                else:
                    # Forward the current state through the network.
                    q = sess.run(output_layer, feed_dict={X: currentState})
                    # Find the max index (the chosen action).
                    index = q.argmax()
                    action = index + 1
                # Decay the epsilon by multiplying by 0.999, not allowing it to go below a certain threshold.
                if (epsilon > epsilonMinimumValue):
                    epsilon = epsilon * 0.999
                nextState, reward, gameOver, stateInfo = env.act(action)
                if (reward == 1):
                    winCount = winCount + 1
                memory.remember(currentState, action, reward, nextState, gameOver)
                # Update the current state and if the game is over.
                currentState = nextState
                isGameOver = gameOver
                # We get a batch of training data to train the model.
                inputs, targets = memory.getBatch(output_layer, batchSize, nbActions, nbStates, sess, X)
                # Train the network which returns the error.
                _, loss = sess.run([optimizer, cost], feed_dict={X: inputs, Y: targets})
                err = err + loss
            print("Epoch " + str(i) + ": err = " + str(err) + ": Win count = " + str(winCount) +
                  " Win ratio = " + str(float(winCount)/float(i+1)*100))
        # Save the variables to disk.
        save_path = saver.save(sess, os.getcwd()+"/model.ckpt")
        print("Model saved in file: %s" % save_path)

if __name__ == '__main__':
    tf.app.run()

When I run it, I get the following output and error:

WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\compat\v2_compat.py:65: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Training new model
WARNING:tensorflow:From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\util\tf_should_use.py:198: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.global_variables_initializer instead.
W0820 22:17:13.656675  9068 deprecation.py:323] From C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\util\tf_should_use.py:198: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use tf.global_variables_initializer instead.
Traceback (most recent call last):
  File "C:\Windows\system32\python", line 267, in <module>
    tf.app.run()
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\absl\app.py", line 299, in run
    _run_main(main, args)
  File "C:\ProgramData\Anaconda3\envs\tens_2\lib\site-packages\absl\app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "C:\Windows\system32\python", line 216, in main
    for i in xrange(epoch):
NameError: name 'xrange' is not defined

How should I fix this? It's very long, but I'd be very grateful for a solution.
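For the question above, the traceback points at the actual problem: xrange() exists only in Python 2, and the script is running under Python 3 (which is also why tensorflow.compat.v1 is in use), so both loops that use it fail with a NameError. A minimal sketch of one low-touch fix, assuming the rest of the script is left as posted:

    # Either replace the two loops directly:
    #   for i in xrange(chosenBatchSize):  ->  for i in range(chosenBatchSize):
    #   for i in xrange(epoch):            ->  for i in range(epoch):
    # or add a small compatibility alias near the imports so the rest of the file is untouched:
    try:
        xrange          # exists on Python 2, nothing to do
    except NameError:
        xrange = range  # Python 3: point the old name at range()

The deprecation warning about tf.initialize_all_variables() is a separate issue; the warning message itself suggests tf.global_variables_initializer as the replacement.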
- Life Tips · Daily Life  Q. A Python coding question, about the parameters.

#1015-42.py
# two integer def function
import random

def add() :  #1 add
    z = x + y
    print("%d + %d = %d" % (x,y,z))

def compare() :  #2 compare
    if x > y :
        print("%d > %d" % (x,y))
    elif x < y :
        print("%d < %d" % (x,y))
    else :
        print("%d = %d" % (x,y))

def loopsum() :  #3 sum
    #x = random.randrange(0,100)
    #y = random.randrange(0,100)
    if x > y :
        z = x
        x = y
        y = z
    hap = 0
    for i in range() :
        hap += i
    print("%d ~ %d sum = %d" % (x,y,hap))

#main
while True :
    x = random.randrange(0,100)
    y = random.randrange(0,100)
    print("--------------")
    print(" 1.add")
    print(" 2.compare")
    print(" 3.sum")
    print(" 4.exit")
    print("--------------")
    n = input("n : ")
    n = int(n)
    if n == 1 :
        add()
    elif n == 2 :
        compare()
    elif n == 3 :
        loopsum()
    elif n == 4 :
        print("end ----")
        break
    else :
        continue

When I run it as-is, options 1 and 2 work fine, but option 3 (loopsum) does not. It does work if I pass the parameters in directly, but I'm wondering why only option 3 fails while 1 and 2 are fine.
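For the question above: add() and compare() only read the module-level x and y, but loopsum() assigns to x and y, which makes them local to the function for the whole call (so the very first read raises UnboundLocalError), and range() is also called with no arguments. A minimal sketch of one fix, passing the values in as parameters (this signature is an illustration, not the book's):

    import random

    def loopsum(x, y):                # 3: sum over the closed range [x, y]
        if x > y:
            x, y = y, x               # swap locally; no need to rebind the globals
        hap = 0
        for i in range(x, y + 1):     # range() needs explicit bounds
            hap += i
        print("%d ~ %d sum = %d" % (x, y, hap))

    # in the menu loop, call it with the two random numbers:
    x = random.randrange(0, 100)
    y = random.randrange(0, 100)
    loopsum(x, y)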
- Life Tips · Daily Life  Q. Please explain why breakwaters are shaped like a Y. The question is just as the title says: breakwaters are shaped like the letter Y, right? They could just as well be built as rectangles, so why the Y shape? I'm curious!
- Restructuring · Employment & Labor  Q. When planning an HRD program for employee language training, is it better to run it in-house or to outsource it? Hello! I'm just starting my second year in HRD. I majored in education with a double major in economics, and I now handle HRD at a large chemical company. After joining I mostly assisted my seniors, and now, in my second year, I've been given my first assignment: employee language training. Until now we have always commissioned large language-training or consulting firms such as Y, M, and C, but the programs aren't properly managed and, most importantly, they come nowhere near the MBO targets the company has set. Every company is different and there is no single right answer, but since this training matters so much to the company's future, I'd like expert opinions on whether it would be better to build an internal team or to keep outsourcing. Thank you.
- Acquisition & Registration Tax · Tax & Taxation  Q. A question about the 2020 acquisition tax revision. Starting next year, the acquisition tax rate for real estate between 600 million and 900 million won will be subdivided. Compared with the current rate, up to what purchase price is the change favorable to the buyer, and above what acquisition price does it become unfavorable? Also, the formula for that bracket is said to be the one below; as someone bad at math I don't really understand it, so please explain it with a simple example. Y = (2/3)X - 3 [Y: tax rate in percent, rounded at the third decimal place; X: acquisition price / 100 million won]. Thank you.
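For the question above, taking only the formula quoted in it at face value (Y = 2/3·X - 3 with Y in percent, X the acquisition price in units of 100 million won, rounded to two decimal places) and assuming the previous flat 2% rate for the 6-9 억원 bracket, the break-even point is where 2/3·X - 3 = 2, i.e. X = 7.5 (about 750 million won): below that the revised rate is lower than before, above it the rate is higher. A quick sketch:

    # Evaluate the quoted formula at a few prices (X is the price in 100-million-won units).
    def revised_rate(price_in_100m_won):
        return round((2 / 3) * price_in_100m_won - 3, 2)   # "rounded at the third decimal place"

    for x in (6.5, 7.0, 7.5, 8.0, 8.5, 9.0):
        print("%.1f 억원 -> %.2f %%" % (x, revised_rate(x)))
    # 6.5 -> 1.33, 7.0 -> 1.67, 7.5 -> 2.00 (same as the old flat 2%), 8.0 -> 2.33, 9.0 -> 3.00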
- Life Tips · Daily Life  Q. A question about Python's format() function. The book's example on formatting decimals is:
y = 3.42134234
>>> "{0:10.4f}".format(y)
'    3.4213'
Please explain in detail what the 10.4f in "{0:10.4f}" means.
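For the format question above: the 0 picks the first argument of format(), 10 is the minimum field width (the result is right-aligned and padded with spaces up to 10 characters), and .4f means fixed-point notation with 4 digits after the decimal point. A short sketch:

    y = 3.42134234
    print("{0:10.4f}".format(y))   # '    3.4213'  -> 6 characters plus 4 leading spaces = width 10
    print("{0:.4f}".format(y))     # '3.4213'      -> precision only, no padding
    print("{0:10.2f}".format(y))   # '      3.42'  -> width 10, 2 decimal places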
- Life Tips · Daily Life  Q. How should I go about upgrading a Ruby on Rails version? The versions currently used at my company are ruby 1.8.7, rails 1.2.6, and ubuntu 14.04, all very old. Upgrading to ruby 2.0 is described as a simple one-line command, gem install rails -y, but the countless errors that would follow would probably keep the service from running normally. I'd like to know, step by step, what kind of development and testing should be done. The service we provide is a WMS and is quite heavy.